ADFOCS 2007: 8th Max-Planck Advanced Course on the Foundations of Computer Science
September 3 - September 7, 2007
Saarbrücken, Germany



Slides used by Prof. Simon and Prof. Zomorodian, as well as solutions to some exercises, are available below next to the abstracts of the lectures.


There will be eight blocks of lectures, exercises and discussions, two per lecturer. Each block lasts about four hours. A morning or afternoon block starts with 1 1/2 hours of lecture, followed by 1 1/2 hours of exercises (in small groups) with coffee breaks, followed by a one-hour discussion of the exercises. The only exception is the last block, in which the exercise session will be replaced by a panel discussion about challenges in the research areas of the lecturers. During the exercise periods, the respective lecturer will be around, as well as some fruit, snacks, and drinks.

All lectures will take place in room HS001 ('Hoersaal 1') on the ground floor of the Computer Science building (number E1.3 on the campus map), right next to the MPII building (number E1.4). The exercise sessions and discussions will take place in room 024 on the ground floor of the MPII building.


Schedule (lunch break 13.00-14.30 every day):

September 3
  9.00-13.00   Cesa-Bianchi / Lugosi: The Weighted Average
  14.30-18.30  Zomorodian: Concepts of Algebraic Topology
September 4
  9.00-13.00   Cesa-Bianchi / Lugosi: Randomized Prediction and Applications
  14.30-18.30  Simon: Stability of Clustering I
September 5
  9.00-13.00   Simon: Stability of Clustering II
  14.30-18.30  Excursion
September 6
  9.00-13.00   Cesa-Bianchi / Lugosi: Learning in Repeated Games, Nash and Correlated Equilibria
  14.30-18.30  Cesa-Bianchi / Lugosi: Prediction under Incomplete Feedback
September 7
  9.00-13.00   Panel Discussion

Evening: School Dinner


Below you will find short abstracts of the lectures. Clicking on a photo takes you to the respective lecturer's homepage.

Nicolò Cesa-Bianchi

Università degli Studi di Milano
to Prof. Cesa-Bianchi's homepage

Learning, Prediction and Games

Unlike standard statistical approaches to forecasting, prediction of individual sequences does not impose any probabilistic assumption on the data-generating mechanism. Yet, prediction algorithms can be constructed that work well for all possible sequences, in the sense that their performance is always nearly as good as the best forecasting strategy in a given reference class. In this course we will first introduce a general model for sequential prediction and define some basic deterministic and randomized prediction strategies for various loss functions. We will then analyze their application to problems of prediction with partial feedback and to repeated game playing, including multiarmed bandit problems, calibration, and minimization of internal regret.
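The weighted-average idea behind the first lecture can be made concrete with a small sketch. The following is our own minimal illustration of the exponentially weighted average forecaster for the absolute loss (the function name and the default eta=1.0 are our choices, not the lecturers'):

```python
import math

def ewa_forecaster(expert_advice, outcomes, eta=1.0):
    """Predict each round with a weighted average of expert predictions;
    each expert's weight decays exponentially in its cumulative loss."""
    n_experts = len(expert_advice[0])
    weights = [1.0] * n_experts
    total_loss = 0.0
    for advice, y in zip(expert_advice, outcomes):
        w_sum = sum(weights)
        prediction = sum(w * f for w, f in zip(weights, advice)) / w_sum
        total_loss += abs(prediction - y)
        # multiplicative update: penalize each expert by its per-round loss
        weights = [w * math.exp(-eta * abs(f - y))
                   for w, f in zip(weights, advice)]
    return total_loss
```

With a good expert in the pool, the forecaster's cumulative loss stays close to that expert's loss; for a tuned eta the monograph proves regret bounds of order sqrt(T log N) against the best of N experts.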

The lectures will be based on the monograph:

Prediction, learning, and games
Nicolò Cesa-Bianchi and Gábor Lugosi
Cambridge University Press, 2006

Gábor Lugosi

Pompeu Fabra University
to Prof. Lugosi's homepage

Hans Ulrich Simon

Ruhr-Universität Bochum
to Prof. Simon's homepage

Stability of Clustering

The goal of cluster analysis is to partition observed data into groups (called "clusters") so that the pairwise dissimilarities between data assigned to the same cluster tend to be smaller than those between data in different clusters. Although clustering is one of the most widely used techniques for exploratory data analysis in various fields, the building of a solid theoretical foundation is still in progress. Among the methods in cluster analysis that currently attract considerable attention from both theoreticians and applied researchers is the "stability approach". The intuitive idea behind "stability" is as follows: if we repeatedly sample data points and apply a clustering algorithm, then a "good" algorithm should produce clusterings that do not vary much from one sample to another. In the tutorial, we present some new results that aim at a "theory of stability" by providing a rigorous mathematical analysis within a clean formal framework. In the first lecture, we derive necessary and sufficient conditions for stable behavior of a clustering algorithm. In the second lecture, we apply the general results to so-called risk-minimizing algorithms whose risk function is induced by a "dissimilarity measure". These clustering algorithms form a rich family with the popular "k-means" algorithm as a special case.
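To make the resampling intuition concrete, here is a toy sketch of our own (not the tutorial's formal framework): it estimates the instability of 2-means on one-dimensional data by clustering pairs of random subsamples and measuring how often the two resulting clusterings disagree, minimized over relabelings of the two clusters.

```python
import random

def kmeans_1d(points, k=2, iters=50):
    """Plain Lloyd iterations on 1-D data; returns sorted centers."""
    centers = sorted(random.sample(points, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

def label(x, centers):
    return min(range(len(centers)), key=lambda j: abs(x - centers[j]))

def instability(points, trials=20, m=None):
    """Average disagreement between clusterings of two random subsamples."""
    m = m or len(points) // 2
    disagreements = 0.0
    for _ in range(trials):
        c1 = kmeans_1d(random.sample(points, m))
        c2 = kmeans_1d(random.sample(points, m))
        diff = sum(label(x, c1) != label(x, c2) for x in points) / len(points)
        # min over the two relabelings (this trick is specific to k = 2)
        disagreements += min(diff, 1 - diff)
    return disagreements / trials
```

On two well-separated Gaussian blobs the estimate is close to 0; on data with no clear cluster structure it is noticeably larger, which is the intuition the stability approach formalizes.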

The topic of the tutorial is brand-new, so we cannot provide pointers to textbooks dealing with the stability of clustering. To get a general idea of cluster analysis, we recommend (one or more of) the following:

  1. Apply GOOGLE to the keyword "Clustering" and see what you get.
  2. Have a look at Chapter 6 of the classic book "Pattern Classification and Scene Analysis" by Duda and Hart.
  3. Have a look at Chapter 14.3 of the book "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman.

Later (but before the tutorial) we will provide further material about clustering stability that will be fairly self-contained, including up-to-date pointers to the relevant literature and talk slides.


Slides now available for download:
Slides Part I, Slides Part II

Solutions to exercises:
Exercise 4, Exercise 5, Exercise 6, Exercise p.05, Exercise p.26, Exercise p.27

Afra Zomorodian

Dartmouth College
to Prof. Zomorodian's homepage

Topological Data Analysis

We often seek to understand the structure of a set of data points. Generally, we assume that the points are sampled from some underlying space in which we are interested. One approach to data analysis is clustering, where we assume that the underlying space has multiple components. As such, clustering attempts to recover the way in which the original space was connected -- its connectivity. Topological data analysis -- the topic of my two lectures -- extends this analysis by examining other attributes of connectivity, such as the existence of tunnels or enclosed spaces.

In the first lecture, we will cover some basic concepts from topology including important invariants that characterize a space. We will also look at structures that we may use to compute these invariants from point sets.
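As a toy preview of such computations (our own sketch, not the lecture material): over the field GF(2), the k-th Betti number of a finite simplicial complex is beta_k = (number of k-simplices) - rank(boundary_k) - rank(boundary_{k+1}), so computing it reduces to Gaussian elimination on the boundary matrices.

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank over GF(2) of a 0/1 matrix given as integer row bitmasks."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = rows.pop()
        if not pivot:
            continue                       # row became zero during elimination
        rank += 1
        low = pivot & -pivot               # lowest set bit = pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def betti(simplices, dim):
    """beta_dim of a complex given as sorted vertex tuples (all faces listed)."""
    def of_dim(d):
        return sorted(s for s in simplices if len(s) == d + 1)

    def boundary_rank(d):
        dsimp, faces = of_dim(d), of_dim(d - 1)
        if not dsimp or not faces:
            return 0
        index = {f: i for i, f in enumerate(faces)}
        rows = []
        for s in dsimp:
            mask = 0
            for f in combinations(s, len(s) - 1):   # the (d-1)-faces of s
                mask |= 1 << index[f]
            rows.append(mask)
        return rank_gf2(rows)

    return len(of_dim(dim)) - boundary_rank(dim) - boundary_rank(dim + 1)
```

For the hollow triangle (a combinatorial circle) this gives one connected component and one loop; adding the filled triangle kills the loop.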

In the second lecture, we will discuss persistent homology, a theory for analyzing point sets at multiple scales. We will also look at some recent applications of persistent homology.
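The pairing at the heart of persistent homology can be sketched with the standard column-reduction algorithm on the boundary matrix over GF(2); the code below is our own illustration, where simplices are sorted vertex tuples listed in filtration order.

```python
from itertools import combinations

def persistence_pairs(filtration):
    """filtration: simplices as sorted vertex tuples, ordered so that every
    face appears before any simplex that contains it. Returns (birth, death)
    index pairs; unpaired simplices create essential homology classes."""
    index = {s: i for i, s in enumerate(filtration)}
    cols = []                       # column j = bitmask of boundary of simplex j
    for s in filtration:
        mask = 0
        if len(s) > 1:
            for f in combinations(s, len(s) - 1):
                mask |= 1 << index[f]
        cols.append(mask)
    low_of = {}                     # pivot row -> column that owns it
    pairs = []
    for j in range(len(cols)):
        col = cols[j]
        # add earlier columns while the highest nonzero row is already taken
        while col and (col.bit_length() - 1) in low_of:
            col ^= cols[low_of[col.bit_length() - 1]]
        cols[j] = col
        if col:
            low = col.bit_length() - 1
            low_of[low] = j
            pairs.append((low, j))  # simplex `low` created what simplex j kills
    return pairs
```

For a triangle filtered vertex by vertex, then edge by edge, then filled in, the pairing reports that two vertices die when the first two edges merge components, and that the loop created by the third edge dies when the 2-simplex enters.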

In both lectures as well as the exercises, I hope to include discussion of current software for topological analysis.

The lectures will be partly based on the book:
Topology for Computing
Afra Zomorodian
Cambridge University Press, 2005

The material in the book is based on my dissertation which can be found here.

There are also useful notes from the most recent incarnation of a course on computational topology.

Slides now available for download:
Slides Part I, Slides Part II


ADFOCS 2007 is organized by Ernst Althaus, Joachim Giesen & Evangelia Pyrga. Help with local arrangements: Petra Mayer. For comments or questions, send an email to adfocs07[at]mpi[minus]inf[dot]mpg[dot]de.

WWW page last updated on Wednesday, 12 September 2007.