Tutorials

Tutorials will be held in three-hour sessions on June 10th, 2008.

Registration

The registration fees per tutorial are:
Standard Rate: 35 €
Undergraduate students: 15 €

To register for the tutorials, please follow the link to our Conference Management System below, click on "Conference Registration" at the top of the page, then on "Register as Participant", and fill in the form. The registration form contains a checkbox for each tutorial. If you already have an author account, you can log in to the Conference Management System with it to register.

Tutorial Registration: Conference Management System (http://dagm2008.confmaster.net)


Kernel methods for Object Recognition:

Since their invention in the early 1990s, support vector machines (SVMs) have evolved into what is currently the most popular classification tool. Because SVMs have few parameters and efficient software packages are freely available, they are today used not only in computer science, but also in engineering and many other disciplines where binary classification is required. Other methods that also rely on the kernel principle have attracted less attention outside of machine learning research: regression, dimensionality reduction, clustering and much more can be formulated so that they use the same kernel framework as SVMs do. Collectively, this family of techniques is called "kernel methods".
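To illustrate this shared framework (this is only a sketch with toy data, not part of the tutorial material, and it assumes NumPy and scikit-learn are available), the following Python snippet feeds one precomputed RBF kernel matrix to two different kernel methods, an SVM classifier and kernel PCA:

# Sketch: one kernel matrix drives several kernel methods
# (assumes NumPy and scikit-learn; toy data, not a real benchmark).
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(100, 5)                      # 100 toy samples with 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy binary labels

K = rbf_kernel(X, X, gamma=0.5)            # Gram matrix K[i, j] = k(x_i, x_j)

svm = SVC(kernel='precomputed', C=1.0).fit(K, y)               # classification
kpca = KernelPCA(n_components=2, kernel='precomputed').fit(K)  # dimensionality reduction

print("SVM training accuracy:", svm.score(K, y))
print("Kernel PCA embedding shape:", kpca.transform(K).shape)

The point of the sketch is only that the data enter both methods exclusively through the kernel matrix K.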
The tutorial will give an introduction to SVMs as well as to less widely known kernel methods. We will mostly ignore the vast amount of theory that exists in the field and instead take a geometric point of view, concentrating on feature spaces as the common link between all kernel methods. Going beyond the textbooks, we will also introduce two recent developments that have the potential to become the "next big thing": multiple kernel learning (MKL) and the learning of structured outputs by means of a joint kernel function.
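Genuine multiple kernel learning optimizes the kernel weights jointly with the classifier; as a deliberately simplified stand-in for that idea (again only a sketch with toy data, assuming scikit-learn), the snippet below combines two base kernels with a convex weight chosen by cross-validation:

# Simplified stand-in for multiple kernel learning: a convex combination of
# two base kernels, with the mixing weight picked by cross-validation.
# Real MKL learns the weights jointly with the SVM; this is only a sketch.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(int)

K_lin = linear_kernel(X, X)
K_rbf = rbf_kernel(X, X, gamma=0.1)

best = None
for beta in np.linspace(0.0, 1.0, 11):        # candidate kernel weights
    K = beta * K_lin + (1.0 - beta) * K_rbf   # a convex combination is again a kernel
    score = cross_val_score(SVC(kernel='precomputed'), K, y, cv=5).mean()
    if best is None or score > best[1]:
        best = (beta, score)

print("chosen weight:", best[0], "cv accuracy:", best[1])

The combination beta * K_lin + (1 - beta) * K_rbf is itself a valid kernel for any beta in [0, 1], which is what makes such mixtures admissible in the first place.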
Although kernel methods are application agnostic, we will approach the field from a Computer Vision point of view and use examples from object classification and object localization for illustration. Nevertheless, we will see that kernel methods are applicable to a wide range of problems, including some where one would not have expected them, e.g. feature selection or learning the structure of graphical models.
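As a small, hypothetical computer-vision flavored example (the histograms below are random stand-ins, not a real image dataset), the sketch classifies bag-of-visual-words histograms with a histogram intersection kernel passed to the SVM as a callable:

# Toy illustration: histogram intersection kernel on bag-of-visual-words
# histograms (random stand-in data, not a real object recognition dataset).
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(A, B):
    # Histogram intersection kernel: k(a, b) = sum_i min(a_i, b_i)
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

rng = np.random.RandomState(0)
n, n_words = 60, 50
H = rng.dirichlet(np.ones(n_words), size=n)    # 60 normalized "visual word" histograms
y = (H[:, :10].sum(axis=1) > 0.2).astype(int)  # toy class labels

clf = SVC(kernel=intersection_kernel, C=10.0).fit(H, y)
print("training accuracy:", clf.score(H, y))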

Graphical models for Pattern Recognition:

Graphical Models (GMs) can be considered a novel probabilistic framework for modeling the statistical dependencies between variables. The resulting joint probabilities can be estimated with machine learning techniques and used to compute posterior probabilities for pattern classification tasks. GMs combine the power of graph theory with probability theory and include many well-established techniques, such as Bayesian networks or Hidden Markov Models (HMMs). However, GMs can express far more complicated statistical relationships than the aforementioned paradigms and are therefore much more powerful. They have recently become increasingly popular for complex statistical pattern recognition tasks in computer vision, speech recognition and multimodal human-machine communication.
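To make the step from a factorized joint probability to a posterior concrete, here is a toy sketch based on the textbook rain/sprinkler/wet-grass Bayesian network (an assumed standard example, not taken from the tutorial), which computes P(Rain | WetGrass) by brute-force enumeration:

# Toy Bayesian network (classic rain / sprinkler / wet-grass example):
# the joint factorizes as P(R, S, W) = P(R) * P(S | R) * P(W | S, R),
# and a posterior such as P(R=1 | W=1) follows by summing out S.
import itertools

P_R = {1: 0.2, 0: 0.8}                                       # P(Rain)
P_S_given_R = {1: {1: 0.01, 0: 0.99}, 0: {1: 0.4, 0: 0.6}}   # P(Sprinkler | Rain)
P_W_given_SR = {                                             # P(WetGrass | Sprinkler, Rain)
    (1, 1): {1: 0.99, 0: 0.01}, (1, 0): {1: 0.9, 0: 0.1},
    (0, 1): {1: 0.8, 0: 0.2},   (0, 0): {1: 0.0, 0: 1.0},
}

def joint(r, s, w):
    return P_R[r] * P_S_given_R[r][s] * P_W_given_SR[(s, r)][w]

# P(Rain = 1 | WetGrass = 1) by enumeration over the hidden sprinkler variable
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in itertools.product((0, 1), repeat=2))
print("P(Rain | WetGrass) =", num / den)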
In the tutorial we will discuss Graphical Models from scratch. We will start with graph theory, covering directed and undirected graphs, cycles, cliques and separators. After that we will deal with conditional independence and Markov properties. The training of Graphical Models will also be part of the tutorial, as will the close relationship between the well-known HMMs and GMs. Further topics addressed in the tutorial are the junction tree algorithm, message passing and inference. We will conclude the tutorial with applications, which may include automatic video editing, dominance detection, group action and event recognition in meetings, handwriting recognition on whiteboards, and spoken language understanding.
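As a small taste of message passing and inference (with made-up parameters, independent of the applications above), the following sketch runs the forward algorithm of a discrete HMM, i.e. sum-product message passing along a chain:

# Sketch of sum-product message passing on a chain: the forward algorithm
# for a discrete HMM with made-up parameters (2 hidden states, 3 symbols).
import numpy as np

pi = np.array([0.6, 0.4])                      # initial state distribution
A = np.array([[0.7, 0.3],                      # A[i, j] = P(state j | state i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],                 # B[i, k] = P(symbol k | state i)
              [0.1, 0.3, 0.6]])

obs = [0, 2, 1, 2]                             # observed symbol sequence

# Forward messages: alpha_t(j) = P(o_1..o_t, q_t = j)
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]              # pass the message one step along the chain

print("P(observations) =", alpha.sum())
print("posterior over last state:", alpha / alpha.sum())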

 


