Vienna University of Technology, Austria
Formal Methods for Monitoring and Synthesis of Spatio-Temporal Properties
Networked dynamical systems are increasingly used as models for a
variety of processes ranging from robotic teams to collections of
genetically engineered living cells. As the complexity of these systems
increases, so does the range of emergent properties that they exhibit.
In pattern recognition and machine learning, spatio-temporal properties
are usually classified according to statistical descriptors (or features).
Despite its success and popularity, this approach lacks a rigorous
foundation for specifying such patterns and reasoning about them in a
systematic way. Formal methods, on the other hand, provide logic-based
languages with well-defined syntax and semantics to specify emergent
behaviours precisely and concisely, together with the techniques needed
to detect them automatically. Formal analysis is challenging, however,
due to the difficulty of expressing spatio-temporal properties in a
suitable formal language. In this talk, we will present some recent
results on novel formal logic-based languages and methods to monitor and
synthesize spatio-temporal properties in networked dynamical systems.
Faculty of Computer Science, University of Vienna
Programming Support for Future Parallel Architectures
Due to physical constraints the performance of single processors has reached its limits, and all major hardware
vendors switched to multi-core architectures. In addition, there is a trend towards heterogeneous parallel
systems comprised of conventional multi-core CPUs, GPUs, and other types of accelerators, mainly because such
systems promise a better performance/power ratio. As a consequence, the development of applications that can
exploit the full potential of emerging parallel systems is becoming more and more challenging. In this talk we
outline the major challenges associated with software development for future parallel architectures, including
higher-level parallel programming models, advanced runtime technology, and automatic performance tuning tools.
In particular, we will present a high-level compositional approach to parallel software development in concert
with an intelligent runtime system and automatic performance tuning techniques. Such an approach can
significantly enhance programmability of future parallel systems, while ensuring efficiency and performance
portability across a range of different architectures. We report on recent research results of two European
projects, PEPPHER and AUTOTUNE, which addressed the challenges of software development for current and emerging
parallel systems, and discuss related research efforts and potential future directions.
Heriot-Watt University, Edinburgh, Scotland
Recent trends in authentication research
The past five years have been especially busy for the study of authentication
techniques, such as passwords. In this talk I will review several recent techniques
for improving authentication security and usability, including the use of honey
passwords and renewed investigations into the use of randomly generated passwords.
In addition, I will discuss the evolution of metrics that are used for measuring
password security, including entropy and more recent attempts to model the guessing
patterns of attackers.
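The uniform-entropy measure alluded to above can be illustrated with a short sketch (a minimal illustration; the function name is ours, and this is the classical uniform-choice model rather than any particular standard's formula):

```python
import math

def uniform_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a password drawn uniformly at random
    from an alphabet of the given size: H = L * log2(N)."""
    return length * math.log2(alphabet_size)

# An 8-character password over the 26 lowercase letters:
print(round(uniform_entropy_bits(8, 26), 1))  # 37.6 bits
```

Human-chosen passwords are far from uniformly random, which is precisely why models of attacker guessing behaviour, as mentioned in the abstract, have been proposed as more realistic measures.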
University College London
Graphs and quanta
I will talk about various ways to associate a graph to a quantum mechanical state or a
quantum operation. This is an old story but still a good excuse to use quantum theory to play
with graphs and graphs to play with quantum theory.
University of Lugano, Switzerland
Flexible interpolation for efficient model checking
Craig interpolation is widely used in symbolic model checking. Most interpolation algorithms
construct the interpolant from a proof of unsatisfiability and a fixed labeling determined by which
part of the propositional formula is being over-approximated. While this yields interpolants of
fixed strength, it limits the possibility of generating small interpolants. This is
problematic since the interpolant size is a determining factor in achieving good overall performance
in model checking. This talk presents a new framework with a flexible labeling approach to construct
small interpolants. In addition to a labeling mechanism that guarantees small interpolants, the
framework makes it possible to control the strength of the interpolants. This talk presents the flexible interpolation mechanism
in action by demonstrating how it works in incremental model checking. We will discuss two different
applications: verification of multiple properties of the same code and verification of software
upgrades. Both use interpolation as a backbone technology and face different strength requirements.
We will demonstrate how flexible interpolation outperforms any
of the fixed interpolation algorithms.
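For readers unfamiliar with Craig interpolation, its defining conditions can be checked by brute force on a toy propositional example (the formulas A and B and the candidate I below are ours, chosen purely for illustration; real interpolation algorithms extract I from a resolution proof):

```python
from itertools import product

# A(p, q) = p AND q        (the part being over-approximated)
# B(p, r) = (NOT p) AND r  (the rest of the formula)
# A AND B is unsatisfiable, so a Craig interpolant I exists with:
#   (1) A implies I,
#   (2) I AND B is unsatisfiable,
#   (3) I mentions only variables shared by A and B (here: p).
A = lambda p, q, r: p and q
B = lambda p, q, r: (not p) and r
I = lambda p, q, r: p  # candidate interpolant over the shared variable

assignments = list(product([False, True], repeat=3))
assert not any(A(*v) and B(*v) for v in assignments)  # A AND B unsat
assert all(I(*v) for v in assignments if A(*v))       # A implies I
assert not any(I(*v) and B(*v) for v in assignments)  # I AND B unsat
print("I = p is a Craig interpolant for (A, B)")
```

Different labelings in a proof-based interpolation algorithm produce logically stronger or weaker interpolants satisfying these same three conditions, which is the degree of freedom the framework described above exploits.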
Charles University, Czech Republic
Understanding transparent and complicated users as instances of preference learning for
Authors: P. Vojtas, M. Kopecky, M. Vomlelova
In this talk we are concerned with user understanding in content-based recommendation. We assume we
have a user-item matrix with explicit ratings and time stamps. Our task is to learn/predict user
preferences on unseen/unrated items.
We are not going to present new mining techniques, nor improve on the achievements of comparable
efforts on particular data sets and metrics. Rather, we concentrate on three issues:
First, we study three different data sets from the same domain of movie recommendation, trying to
avoid features specific to a single data set and aiming to be more generic. We also consider
semantic enrichment of movie data to enable content-based recommendation.
Second, we use several metrics which have not been used in the recommender-systems domain so far.
Besides classical rating approximation with RMSE and top-k, we study new metrics for predicting
next-k and (at least) k-hit. We treat the parameter k as variable and measure how the metrics
behave as it changes.
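As a concrete reading of two of these metrics, RMSE and an at-least-k-hit indicator can be computed as follows (a minimal sketch; the function names and the exact definition of "(at least) k-hit" are our assumptions, since the abstract does not define them formally):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and true ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def at_least_k_hit(recommended, relevant, k):
    """1 if at least k of the recommended items are relevant,
    0 otherwise (k = 1 gives a plain hit metric)."""
    return int(len(set(recommended) & set(relevant)) >= k)

print(round(rmse([4.0, 3.5], [5.0, 3.0]), 2))               # 0.79
print(at_least_k_hit(["m1", "m2", "m3"], ["m2", "m3"], 2))  # 1
```

Varying k in `at_least_k_hit` gives the family of metrics whose behaviour the study traces.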
Finally, and most importantly, we trace the performance of our methods and metrics as a
distribution over individual users. This helps us detect transparent and complicated users.
Improving these metrics (for instance, 1-hit) can contribute to increased user satisfaction.
We provide results of experiments with several combinations of methods, data sets, and metrics
along these three axes. Nevertheless, all our experiments are offline, on publicly accessible
data. Validation in real online A/B testing requires access to a live application, which we have
not yet been able to arrange. We concentrate only on the algorithmic part of recommendation; the
business and marketing aspects of the recommendation problem are out of the scope of this study.