MEMICS
Doctoral Workshop on Mathematical and
Engineering Methods in Computer Science
 
organized jointly by Masaryk University
and Brno University of Technology, Czechia
2014
NEWS
 
December 21, 2014
MEMICS 2015
We are glad to invite you to MEMICS 2015.
November 21, 2014
Best paper awards
Best papers announced.
October 17–19, 2014, University Centre, Telč, Czech Republic
 

Invited speakers
Daniel Lokshtanov
University of Bergen, Norway
Tree Decompositions and Graph Algorithms
A central concept in graph theory is the notion of tree decompositions: decompositions that allow us to split a graph into "nice" pieces by "small" cuts. Many algorithmic problems on graphs can be solved by decomposing the graph into "nice" pieces, finding a solution in each of the pieces, and then gluing these solutions together to form a solution for the entire graph. Examples of this approach include algorithms for deciding whether a given input graph is planar, the k-Disjoint Paths algorithm of Robertson and Seymour, as well as a plethora of algorithms on graphs of bounded treewidth.
By playing with the formal definition of "nice" one arrives at different kinds of decompositions, with different algorithmic applications. For example, graphs of bounded treewidth are the graphs that may be decomposed into "small" pieces by "small" cuts. The structure theorem for minor-free graphs of Robertson and Seymour states that minor-free graphs are exactly the graphs that may be decomposed by "small" cuts into pieces that can "almost" be drawn on a surface of small genus.
In this talk we will ask the following lofty question: is it possible that every graph has one "nicest" tree decomposition which simultaneously decomposes the graph into parts that are "as nice as possible" for any reasonable definition of nice? And, if such a decomposition exists, how fast can we find it algorithmically?
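The decompose, solve, and glue pattern described in the abstract can be made concrete in the simplest possible setting. The sketch below (illustrative only, not taken from the talk; all names and the example are assumptions) computes a maximum-weight independent set on a tree, i.e. a graph of treewidth 1: each subtree is a "piece", it is solved recursively, and the two partial solutions (root included or excluded) are glued at the parent. Dynamic programming over general tree decompositions follows the same pattern, with a table per bag instead of a pair per vertex.

```python
# Illustrative sketch: maximum-weight independent set on a tree, the
# treewidth-1 special case of dynamic programming over a tree decomposition.
# Each subtree is solved on its own; partial solutions are glued at the parent.

def max_weight_independent_set(tree, weights, root):
    """tree maps each vertex to the list of its children."""
    def solve(v):
        # (best weight with v excluded, best weight with v included)
        excluded, included = 0, weights[v]
        for child in tree.get(v, []):
            child_excluded, child_included = solve(child)
            excluded += max(child_excluded, child_included)
            included += child_excluded  # children of a taken vertex must be excluded
        return excluded, included

    return max(solve(root))

# Example: the optimum picks c, d and e for a total weight of 10.
tree = {"a": ["b", "c"], "b": ["d", "e"], "c": [], "d": [], "e": []}
weights = {"a": 3, "b": 2, "c": 4, "d": 5, "e": 1}
print(max_weight_independent_set(tree, weights, "a"))  # -> 10
```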

Michael Tautschnig
Queen Mary University of London
Automating Software Analysis at Very Large Scale
Software in actual use today is not known to follow any particular distribution, whether syntactically (in the language of all programs described by the grammar of a given programming language) or semantically (for example, in the set of reachable states). Hence claims deduced from any given set of benchmarks need not extend to real-world software systems.
When building software analysis tools, this affects all aspects of tool construction: from language front ends that cannot parse and process real-world programs, through inappropriate assumptions about (non-)scalability, to failing to meet the actual needs of software developers.
To narrow the gap between real-world software demands and software analysis tool construction, an experiment using the Debian Linux distribution has been set up. The Debian distribution presently comprises more than 22,000 source packages. Focusing on C source code, more than 400 million lines of code are automatically analysed in this experiment, resulting in a number of improvements to the analysis tools, as well as more than 700 public bug reports to date.
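An experiment of this kind implies a batch loop of roughly the following shape. The sketch below is only a guess at its outermost layer; `my-analysis-tool` is a hypothetical placeholder command, not the tool or options actually used in the experiment.

```python
# Hedged sketch of a batch-analysis driver over unpacked Debian source
# packages. `my-analysis-tool` is hypothetical; the real infrastructure
# behind the experiment is not described here.

import subprocess
from pathlib import Path

def analyse_packages(root, tool="my-analysis-tool"):
    results = {}
    for package_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        c_files = list(package_dir.rglob("*.c"))
        if not c_files:
            continue  # not a C package, skip it
        lines = sum(sum(1 for _ in f.open(errors="replace")) for f in c_files)
        proc = subprocess.run([tool, str(package_dir)],
                              capture_output=True, text=True)
        results[package_dir.name] = {
            "c_files": len(c_files),
            "lines_of_code": lines,
            "exit_code": proc.returncode,
        }
    return results

# Usage (assuming source packages have been unpacked under ./debian-sources):
# summary = analyse_packages("debian-sources")
```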

Gianni Antichi
University of Cambridge
Hardware-accelerated networking systems: practice against theory
Computer networks are the hallmark of 21st-century society and underpin virtually all infrastructures of the modern world. Building, running and maintaining enterprise networks is getting ever more complicated and difficult. Part of the problem is the proliferation of real-time applications (voice, video, gaming), which demand higher bandwidth and low-latency connections, pushing network devices to work at higher speeds. In this scenario, hardware acceleration comes to the aid of time-critical operations. This talk will introduce the most common hardware-accelerated network processing operations, such as IP lookup, packet classification and network monitoring, taking the widely used NetFPGA platform as a reference. We will present a list of technical challenges that must be addressed in the transition from a simple layer-2 switching device, via a packet classifier, to a highly accurate network monitoring system.
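As a point of reference for the "theory" side, IP lookup is longest-prefix matching over a routing table. The sketch below (purely illustrative, not NetFPGA code; class and port names are assumptions) performs it in software with a binary trie; hardware platforms typically accelerate the same function with TCAM-style or pipelined memory lookups.

```python
# Minimal software sketch of IP lookup (longest-prefix match) with a binary trie.

import ipaddress

class PrefixTrie:
    def __init__(self):
        self.root = {}

    def insert(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["next_hop"] = next_hop

    def lookup(self, address):
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, self.root.get("next_hop")
        for b in bits:
            node = node.get(b)
            if node is None:
                break
            best = node.get("next_hop", best)  # remember the longest match so far
        return best

trie = PrefixTrie()
trie.insert("10.0.0.0/8", "portA")
trie.insert("10.1.0.0/16", "portB")
print(trie.lookup("10.1.2.3"))  # -> portB (the longer matching prefix wins)
print(trie.lookup("10.9.9.9"))  # -> portA
```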

Jozef Ivanecký
European Media Laboratory
Today's Challenges for Embedded ASR
Automatic Speech Recognition (ASR) is nowadays pervading areas that were unimaginable a few years ago. This progress over the past few years was achieved not because of improvements in core embedded ASR technology, but mainly because of massive changes in the smartphone world, as well as the availability of small, powerful and affordable Linux-based hardware. These changes answered two important questions: 1. How to make ASR always available? 2. How to install a local, affordable ASR system almost anywhere?
In recent years we can also observe the growth of freely available ASR systems with acceptable speed and accuracy. Together with the changes in the mobile world, this makes it possible to embed remote ASR into applications very quickly and without deep knowledge of speech recognition. What is the future of real embedded ASR systems in this case?
The goal of this talk is to present two embedded ASR applications which would not have been possible without the above-mentioned changes of recent years, and to point out their advantages in contrast to today's quick solutions. The first demonstrates how changes in user behaviour allowed the design of a usable voice-enabled house control application accepted by all age groups. The second focuses mainly on an extremely reliable in-car real-time speech recognition system, which can also use remote ASR for some specific tasks.

Stefan Wörz
University of Heidelberg
3D Model-Based Segmentation of 3D Biomedical Images
A central task in biomedical image analysis is the segmentation and quantification of 3D image structures. A large variety of segmentation approaches have been proposed, including approaches based on different types of deformable models. A main advantage of deformable models is that they allow incorporating a priori information about the considered image structures. In this contribution we give a brief overview of commonly used deformable models such as active contour models, statistical shape models, and analytic parametric models. Moreover, we present in more detail 3D analytic parametric intensity models, which enable accurate and robust segmentation and quantification of 3D image structures. Such parametric models have been successfully used in different biomedical applications, for example, for the localization of 3D anatomical point landmarks in 3D MR and CT images, for the quantification of vessels in 3D MRA and CTA images, as well as for the segmentation of cells and subcellular structures in 3D microscopy images.
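As a flavour of what fitting an analytic parametric intensity model involves, the sketch below fits a simple 3D Gaussian blob to an image patch by least squares. It is illustrative only: the function names, the model and the synthetic data are assumptions, and the models presented in the talk are considerably more elaborate.

```python
# Hedged sketch: fit a 3D Gaussian intensity model to a 3D image patch.

import numpy as np
from scipy.optimize import least_squares

def gaussian_blob(params, zz, yy, xx):
    # params: background, amplitude, centre (z0, y0, x0), width sigma
    a0, a1, z0, y0, x0, sigma = params
    r2 = (zz - z0) ** 2 + (yy - y0) ** 2 + (xx - x0) ** 2
    return a0 + a1 * np.exp(-r2 / (2.0 * sigma ** 2))

def fit_blob(patch, initial_params):
    zz, yy, xx = np.meshgrid(*(np.arange(n) for n in patch.shape), indexing="ij")
    def residuals(p):
        return (gaussian_blob(p, zz, yy, xx) - patch).ravel()
    return least_squares(residuals, initial_params).x

# Synthetic test: recover the parameters of a noisy blob.
rng = np.random.default_rng(0)
zz, yy, xx = np.meshgrid(np.arange(16), np.arange(16), np.arange(16), indexing="ij")
truth = [10.0, 100.0, 8.0, 7.5, 8.5, 2.0]
patch = gaussian_blob(truth, zz, yy, xx) + rng.normal(0.0, 1.0, (16, 16, 16))
print(fit_blob(patch, [0.0, 50.0, 8.0, 8.0, 8.0, 3.0]))  # close to `truth`
```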

Derek Groen
Centre for Computational Science, University College London
High-performance multiscale computing for modelling cerebrovascular blood flow and nanomaterials
Stroke is a leading cause of adult disability, and was responsible for 50,000 deaths in the UK in 2012. Brain haemorrhages are responsible for 15% of all strokes in the UK, and 50% of all strokes in children. Within UCL we have developed the HemeLB simulation environment to gain a better understanding of brain haemorrhages, and of blood flow in sparse geometries in general. In this talk I will introduce HemeLB and summarize the many research efforts made around it in recent years. I will present a cerebrovascular blood flow simulation which incorporates input from the wider environment of a cerebrovascular network by coupling a 1D discontinuous Galerkin model to a 3D lattice-Boltzmann model, as well as several advances that we have made to improve the performance of our code. These include vectorization of the code, improved domain decomposition techniques and some preliminary results on using non-blocking collectives.
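The last of those performance techniques can be sketched generically. The mpi4py fragment below (illustrative only, not HemeLB code, and assuming an MPI-3-capable installation) shows the pattern behind non-blocking collectives: a global reduction is started, purely local work proceeds while the communication is in flight, and the result is awaited only when it is actually needed.

```python
# Hedged sketch of overlapping computation with a non-blocking collective.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

local_residual = np.array([np.random.rand()])   # stand-in for a locally computed norm
global_residual = np.empty(1)

# Start the global reduction without blocking ...
request = comm.Iallreduce(local_residual, global_residual, op=MPI.SUM)

# ... and overlap it with purely local work (e.g. streaming/collision updates).
local_field = np.random.rand(64, 64)
local_field = 0.25 * (np.roll(local_field, 1, 0) + np.roll(local_field, -1, 0)
                      + np.roll(local_field, 1, 1) + np.roll(local_field, -1, 1))

# Only now block until the reduction has completed.
request.Wait()
if comm.rank == 0:
    print("global residual:", global_residual[0])
```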
I will also present our ongoing work on clay-polymer nanocomposites where we use a three-level multiscale scheme to produce a chemically-specific model of clay-polymer nanocomposites. We applied this approach to study collections of clay mineral tactoids interacting with two synthetic polymers, polyethylene glycol and polyvinyl alcohol. The controlled behaviour of layered materials in a polymer matrix is centrally important for many engineering and manufacturing applications. Our approach opens up a route to computing the properties of complex soft materials based on knowledge of their chemical composition, molecular structure and processing conditions.

 
 
Copyright © FI MU and FIT VUT, Brno, 2005–2014