We present different topics of our research every Wednesday at 11:30 am in room 105, Obermarkt 17. For news about the EAD-Lunch talks and seminars, feel free to subscribe to EAD-Public@googlegroups.com. (Register here: https://groups.google.com/forum/?hl=de&fromgroups#!forum/ead-public)
Technical assistance systems and Ambient Assisted Living have the potential to enable older people to continue living independently in their familiar environment, which essentially means a higher quality of life for them. Unfortunately, most people do not know about these options or how to get financial support for them. This talk gives a general introduction to the VATI project, which addresses this gap. It outlines the project's motivation, ideas, and goals, and presents the current state of the VATI technology navigator, an interactive web-based platform on which older people and their relatives can easily find all the necessary information. Finally, we examine future directions and open a discussion about the remaining development steps as well as infrastructural and setup-related questions.
Dealing with information overload has become a side effect of living in a digital world. Many commonly used tools for data access and search are optimized for a very specific environment, such as search engines for web-based content or SQL queries for data stored in relational database management systems. Applying such techniques outside their intended scope can yield poor and inconsistent search results as soon as the search covers data sources with differing structure and semantics. This is often the case on social exchange platforms of large enterprises, where information is stored in a wide range of archives: databases, files, or the web. The newly started NXTM project focuses on several aspects of these problems. The talk gives an insight into the project's motivation, ideas, and goals to be achieved in the coming months.
Starting a software development project is often harder than it should be. Detailed requirements are missing at the beginning, or they change frequently. Especially in a research context, these problems can cause long delays, in the worst case lasting until the end of the project. Many of us start by thinking about the overall design, which technologies to use, and similar questions. That is fine, but there is often the temptation to get the project perfect, which is impossible without concrete requirements. It is a chicken-and-egg problem, and the bottom line is that the project can get stuck at this first stage. This talk presents experiences from my Master's project and a possible solution to the problem above: letting tests drive the requirements and, in the end, the overall software design. In the presented study this approach led to a substantial development speedup and a well-tested code base. Along the way, the talk gives an insight into and a forecast of the architecture of CoPPDA, as well as some information about Netty, an asynchronous networking library for Java.
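The test-first workflow described above can be sketched briefly. The following Python sketch is language-agnostic (the project itself uses Java and Netty); `frame_message` and the 2-byte length prefix are hypothetical examples of a requirement, not the actual CoPPDA protocol:

```python
# A requirement is first captured as a test, before any implementation
# exists. frame_message and the length-prefix framing are illustrative,
# hypothetical names -- not taken from the talk's project.

def test_prefix_encodes_length():
    framed = frame_message(b"hello")
    # The first two bytes must be the payload length, big-endian.
    assert framed[:2] == (5).to_bytes(2, "big")
    # The rest must be the payload itself, unchanged.
    assert framed[2:] == b"hello"

# Only then is the implementation written, just large enough to
# satisfy the test -- the test drives both requirement and design.
def frame_message(payload: bytes) -> bytes:
    """Prefix payload with its length as a big-endian 16-bit integer."""
    return len(payload).to_bytes(2, "big") + payload

test_prefix_encodes_length()
```

The point of the pattern is that the failing test *is* the concrete requirement, which sidesteps the chicken-and-egg problem of designing against requirements that do not yet exist.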
The cloud service model has fundamentally changed the way computing services are planned, built, delivered, and utilized. As cloud computing becomes ubiquitous, the need for improved processes for provisioning and managing cloud services grows rapidly. In this context, a service level agreement (SLA) between a service provider and a service consumer plays a vital role, since it documents all details related to the expected or promised quality levels of a cloud service. In this talk, a few use cases are discussed to elaborate the necessity of continuous SLA monitoring with respect to cost effectiveness and the smooth functioning of business activities. SLA-based monitoring of cloud services may be an easy task in simple cases where a single user consumes cloud services, but it becomes complex when different branches of one large organization use multiple cloud services and the discontinuation of a crucial service may affect not only the smooth functioning of a local branch but others as well. Such a setup is an increasingly common instance of continuous distributed monitoring, where a number of observers make observations at different locations and communicate them to a central coordinator for analysis. Different strategies for continuous distributed monitoring are presented in this talk. A framework is being implemented that not only converts and stores SLAs electronically but also monitors cloud services in distributed setups. An implementation of continuous distributed monitoring for Amazon's cloud storage service S3 is also presented.
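The observer/coordinator setup described above can be illustrated with a minimal sketch. This is a generic slack-based variant of the continuous distributed monitoring pattern, not the framework from the talk; class names, the slack value, and the "failed request" metric are all assumptions for illustration:

```python
# Sketch of continuous distributed monitoring: each observer keeps a
# local counter (e.g. failed S3 requests seen at one branch) and only
# reports to the coordinator when its count drifts past a local slack,
# which reduces communication. All names and thresholds are illustrative.

class Coordinator:
    def __init__(self, global_threshold):
        self.global_threshold = global_threshold
        self.latest = {}  # observer -> last reported local count

    def update(self, observer, count):
        self.latest[observer] = count
        # Check the global SLA condition over all reported counts.
        if sum(self.latest.values()) >= self.global_threshold:
            print("SLA alert: global threshold reached")

class Observer:
    def __init__(self, coordinator, slack=10):
        self.coordinator = coordinator
        self.slack = slack
        self.count = 0      # local observations so far
        self.reported = 0   # value last sent to the coordinator

    def observe(self, n=1):
        self.count += n
        # Only communicate when the local drift exceeds the slack.
        if self.count - self.reported >= self.slack:
            self.coordinator.update(self, self.count)
            self.reported = self.count
```

The design trade-off is the usual one in this setting: a larger slack means fewer messages to the coordinator but a coarser view of the global state.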
GPGPU (general-purpose computing on graphics processing units) offers a promising way to improve the runtime and quality of scientific computations on commodity hardware. In this work we develop highly parallel variants of simulated annealing (SA) and discrete particle swarm optimization (DPSO) to solve the common due date problem (CDD) and the unrestricted common due date problem with controllable processing times (UCDDCP) on a graphics processing unit. For both scheduling problems, we present linear-time algorithms to optimize a given sequence. To obtain an optimal job sequence for both problems, we implement parallel SA and DPSO algorithms on the GPU. We describe the Compute Unified Device Architecture (CUDA), its memory model, and its execution model, which we use to create efficient solutions. Significant speedups were achieved compared to CPU implementations.
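To make the SA side concrete, here is a minimal serial sketch of simulated annealing over a job sequence; in the GPU version described in the talk, many such chains would run in parallel. The cost function is a placeholder, not the actual CDD earliness/tardiness objective, and all parameter values are illustrative:

```python
import math
import random

# Serial sketch of simulated annealing for a job-sequencing problem.
# On a GPU, each thread (or block) would run an independent chain like
# this one. The cost function is supplied by the caller; here it is a
# stand-in, not the CDD objective from the talk.

def anneal(sequence, cost, t0=100.0, cooling=0.95, steps=2000):
    current = list(sequence)
    best = list(current)
    t = t0
    for _ in range(steps):
        # Neighbor move: swap two random positions in the job sequence.
        i, j = random.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling
    return best
```

The linear-time sequence-evaluation algorithms mentioned in the abstract would plug in as the `cost` function, which matters because `cost` is called in every iteration of the annealing loop.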