We present different topics of our research every Wednesday at 11:30 am in room 105, Obermarkt 17. For news about the EAD-Lunch talks and seminars, feel free to subscribe to EAD-Public@googlegroups.com. (Register here: https://groups.google.com/forum/?hl=de&fromgroups#!forum/ead-public)
Virtualization in cloud computing already enables the efficient utilization of existing compute infrastructure. Efficiently utilizing virtual instances, however, which involves selecting the right instance type for a specific application, is still an ongoing research topic. To support instance-type selection, resource demand estimation helps to narrow down the amount of resources necessary to execute an application. This is already possible for specific application types, but not for arbitrary applications. We propose that application classification can help to select the proper resource demand estimation strategy for an application. Application classification has been widely studied for networking applications, using attributes such as network traffic and packet size distributions. However, only a few papers consider other performance metrics, such as CPU utilization and memory usage, or application-specific metrics, such as input and output data size and application configuration changes. We analyze the existing research and discuss recent advances as well as remaining challenges that are especially important in cloud environments, and we present solutions to overcome these obstacles.
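As a minimal sketch of classification on runtime performance metrics, the snippet below assigns an application to a class via a nearest-centroid rule; the metric pairs, class names, and centroid values are illustrative assumptions, not taken from the talk.

```python
# Classify an application by its resource profile using a
# nearest-centroid rule over (CPU utilization %, memory usage %).
# The classes and centroids below are invented for illustration.

CENTROIDS = {
    "cpu-bound":    (90.0, 20.0),
    "memory-bound": (30.0, 85.0),
    "balanced":     (50.0, 50.0),
}

def classify(cpu_util, mem_usage):
    """Return the class whose centroid is closest (squared Euclidean)."""
    def sq_dist(cls):
        cx, cy = CENTROIDS[cls]
        return (cpu_util - cx) ** 2 + (mem_usage - cy) ** 2
    return min(CENTROIDS, key=sq_dist)

print(classify(95.0, 15.0))  # a CPU-heavy profile
```

In practice such a classifier would be trained on measured profiles; the point here is only that coarse runtime metrics can already separate application classes that call for different resource demand estimation strategies.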
The process of agile software development can be understood as a continuous self-optimization process of the development team. With this focus, the ideas behind concepts such as adaptive case management and lean startup are realized in the domain of software development. The presentation introduces the integration framework openDIP, which allows the development team to cross-link software development tools and other systems efficiently, and also enables it to describe its own processes with scripts and to adjust them continuously. Beyond that, the talk motivates how the openDIP concepts can be applied to other domains and used for the agile adaptation of business processes.
The work on openDIP has been realized in close cooperation between Saxonia Systems AG and the Department of Computer Science of the University of Applied Sciences Zittau/Görlitz. First considerations were presented at the IEEE/SICE International Symposium on System Integration in Kobe, Japan, in December 2013. Currently, efficient development and work interfaces for the knowledge worker are being implemented and generalized for domains beyond software development. A core idea is that the IT department provides a functional operating system for specialty departments (Betriebssystem für Fachabteilungen, "fachliches Betriebssystem"). On this basis, openDIP enables the personnel of a specialty department to act as specialty developers (Fachentwickler), building process-supporting applications with the support of IT specialists.
Despite recent developments and privacy issues, cloud computing remains an important technology for storing and processing large amounts of data. There are many professional cloud service providers (e.g. Amazon, Google, Oracle), but with regard to privacy issues and interests in particular, the concept of a private cloud is an interesting solution. The talk discusses the evaluation of two different cloud stacks, addressing in particular the efficiency and automation of the installation process. A Java prototype demonstrates a solution to some of these issues, e.g. job automation.
Today, many heterogeneous energy production systems operate side by side but have no shared strategy for maximizing efficiency. We present a software prototype for optimizing energy systems in multi-family houses, focused on co-generation units (Blockheizkraftwerke). By introducing a central software component, we aim to improve the overall communication between multiple energy systems. Our approach provides a Python-based development environment and monitoring tool, and we use forecasting algorithms to predict the systems' behaviour in order to find the most efficient settings.
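As a hedged sketch of the forecasting step, the snippet below uses simple exponential smoothing to produce a one-step-ahead prediction from a demand series; the talk does not name a specific forecasting algorithm, and the demand values are invented.

```python
# One-step-ahead forecast via simple exponential smoothing.
# The smoothing factor alpha and the demand series are illustrative.

def exp_smooth_forecast(series, alpha=0.5):
    """Fold the series into a smoothed level; return it as the forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

heat_demand = [4.0, 5.0, 4.5, 6.0, 5.5]  # e.g. hourly heat demand in kW
print(exp_smooth_forecast(heat_demand))
```

A predicted demand like this could feed a central controller that decides when running the co-generation unit is most efficient.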
The talk considers the latest developments in the field: why Duolingo does more than teach languages, why programs do not have to be coded by hand, and how Web Science changes the development of applications. In his presentation, Prof. Gaedke focuses on a few of these aspects and shows current trends in Web Engineering.
With openDIP, a new approach to software development with a strict separation of developer roles is introduced. The main user is a knowledge worker who can use the platform or create apps without needing to fully understand the data sources or the algorithms involved. Thanks to intelligent model mapping, he does not even have to deal with complex data: data from all sources is mapped (by functions) to generalized data structures that are easy to manipulate. The functions themselves are simple data-processing programs which, in terms of complexity, can be compared to UNIX command-line tools such as sed or grep. They are stored in repositories that can be located anywhere, and a store is provided to request or buy functions from a development community.
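The mapping idea can be sketched as follows: small functions turn heterogeneous source records into one generalized structure, which further small functions then filter, much like a tiny grep. All field names, sources, and records below are invented for illustration and are not part of openDIP's actual API.

```python
# Two hypothetical mapping functions: each turns a source-specific
# record into the same generalized structure (id, title, source).

def from_issue_tracker(rec):
    return {"id": rec["key"], "title": rec["summary"], "source": "tracker"}

def from_wiki(rec):
    return {"id": rec["page_id"], "title": rec["heading"], "source": "wiki"}

def grep(items, needle):
    """Filter generalized items by title substring, like a tiny 'grep'."""
    return [item for item in items if needle in item["title"]]

items = [
    from_issue_tracker({"key": "DIP-1", "summary": "Fix login bug"}),
    from_wiki({"page_id": 42, "heading": "Login guide"}),
]
print(grep(items, "Login"))
```

Because every source is mapped to the same structure, the filtering function never needs to know where a record came from, which is the point of the generalized data structures described above.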
The field of Ambient Assisted Living (AAL) is becoming more important because of the current demographic development and the ageing society. To allow elderly people a more autonomous life in their own homes, various tools are being developed. The talk considers how to assist users in choosing the tools most useful for their specific needs, which is discussed in two ways. First, the topic of optimized usability is addressed, including the design of a user interface specifically for the target group of elderly people. Second, the decision support itself and adequate techniques for it are analyzed; the latter considers approaches known from expert systems.
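The expert-system-style decision support mentioned above can be illustrated with a minimal rule base: if-then rules match a user's needs profile to candidate tools. The rules, profile fields, and tool names are purely hypothetical.

```python
# A tiny rule base in the spirit of an expert system: each rule pairs
# a condition on the user's profile with a recommended AAL tool.
# Rules, fields, and tool names are invented for illustration.

RULES = [
    (lambda u: u["lives_alone"] and u["fall_risk"], "fall detector"),
    (lambda u: u["forgetful"], "medication reminder"),
    (lambda u: u["mobility_limited"], "emergency call button"),
]

def recommend(profile):
    """Return every tool whose rule condition matches the profile."""
    return [tool for condition, tool in RULES if condition(profile)]

profile = {"lives_alone": True, "fall_risk": True,
           "forgetful": True, "mobility_limited": False}
print(recommend(profile))
```

A real system would of course need far richer rules and explanations for its suggestions, but the matching step stays the same in principle.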
In the field of metaheuristic algorithms, experimentation is probably the most important tool to assess and compare the performance of different algorithms. However, most studies limit themselves to presenting the means and standard deviations of the final benchmark results. The objective of this project is to create a web-based application that provides deeper insights into an algorithm's behavior and more holistic comparisons with other algorithms. The core concept is to store all data on a web server and run experiments on a Linux cluster. Each experiment will use the same number of nodes, which makes the results of different experiments comparable. The analysis of the outputs is rendered as a PDF or XHTML report containing diagrams and plots that evaluate and compare the tested algorithms. These concepts not only enable any user to compare all kinds of algorithms without having to account for the machines the other algorithms were tested on, but also to analyze any algorithm's progress over time.
Since Edward Snowden's disclosures about NSA surveillance programs, interest in privacy-preserving data mining has grown considerably. Because of that, many companies have jumped on the bandwagon and offer solutions of varying quality. The main research in this field, however, started around the year 2000, with many follow-up papers since then. In the research project CoPPDA (Corporate Privacy Preserving Data Analysis) we use these theoretical foundations to build services for real-life use cases. In this talk we show how to build a decision tree with the ID3 algorithm using the Gini index and the Paillier cryptosystem.
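The splitting criterion mentioned above can be sketched on plaintext data: the Gini index measures the impurity of a label set, and the tree builder picks the attribute whose split minimizes the weighted impurity. This sketch deliberately omits the Paillier encryption layer; the toy dataset is invented.

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum over classes of p_i^2."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def split_gini(rows, labels, attr):
    """Weighted Gini impurity after splitting the rows on 'attr'."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr], []).append(label)
    n = len(labels)
    return sum(len(g) / n * gini(g) for g in groups.values())

rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
print(gini(labels))                      # impurity before the split
print(split_gini(rows, labels, "outlook"))  # impurity after the split
```

In the privacy-preserving setting described in the talk, the class counts entering these formulas would be aggregated under Paillier encryption rather than in the clear, but the impurity computation itself is the same.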
Scheduling against due dates is a recurring and general scheduling problem that the production and transportation industries have to tackle. Continuing our research in this field, in this talk we discuss one such problem: Common Due-Date (CDD) scheduling on a single machine. We present and prove a novel property of the CDD problem along with a linearly bounded exact algorithm for a given job sequence on single and parallel machines. Furthermore, we discuss another important new property for the non-restrictive case of the CDD with controllable processing times and provide an O(n^2) algorithm for a given job sequence.
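For readers unfamiliar with the CDD objective, the snippet below evaluates a fixed job sequence on a single machine against a common due date d, summing earliness and tardiness penalties; the unit penalty weights and the example data are assumptions, and this is only the cost function, not the algorithms from the talk.

```python
# Total weighted earliness/tardiness of a fixed job sequence against
# a common due date d. Weights alpha (earliness) and beta (tardiness)
# and the processing times below are illustrative assumptions.

def cdd_cost(proc_times, d, alpha=1.0, beta=1.0):
    cost, t = 0.0, 0.0
    for p in proc_times:
        t += p                        # completion time of this job
        if t < d:
            cost += alpha * (d - t)   # earliness penalty
        else:
            cost += beta * (t - d)    # tardiness penalty
    return cost

print(cdd_cost([3, 2, 4], d=6))  # completions 3, 5, 9 -> 3 + 1 + 3 = 7
```

The exact algorithms in the talk decide, for a given sequence, how to place the jobs around the due date so that this cost is minimized.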