Schedule of the Summer School

All lectures are 60 min + 15 min for discussion.

Monday, September 21

10:30 - 11:00 Registration & Welcome coffee
11:00 - 12:15 Zoltán Szigeti: Connectivity Problems (Part 1)
12:15 - 14:45 Lunch break
14:45 - 16:00 Kazuo Murota: Discrete Convex Analysis (Part 1)
16:00 - 16:30 Tea and cake
16:30 - 17:45 Eden Chlamtac: Lift-and-Project Methods and Integrality Gaps (Part 1)

Tuesday, September 22

9:15 - 10:30 Heiko Röglin: Smoothed Analysis of Algorithms (Part 1)
10:30 - 11:00 Group photo and coffee break
11:00 - 12:15 Kazuo Murota: Discrete Convex Analysis (Part 2)
12:15 - 14:45 Lunch break
14:45 - 16:00 Eden Chlamtac: Lift-and-Project Methods and Integrality Gaps (Part 2)
16:00 - 16:30 Tea and cake
16:30 - 17:45 Problem session

Wednesday, September 23

9:15 - 10:30 Eden Chlamtac: Lift-and-Project Methods and Integrality Gaps (Part 3)
10:30 - 11:00 Coffee break
11:00 - 12:15 James Lee: Semi-Definite Extended Formulations and Sums of Squares (Part 1)
12:15 - 14:45 Lunch break
14:45 - 16:00 Zoltán Szigeti: Connectivity Problems (Part 2)
16:00 - 16:30 Tea and cake
16:30 - 17:45 Problem session

Thursday, September 24

9:15 - 10:30 Kazuo Murota: Discrete Convex Analysis (Part 3)
10:30 - 11:00 Coffee break
11:00 - 12:15 James Lee: Semi-Definite Extended Formulations and Sums of Squares (Part 2)
12:15 - 14:45 Lunch break
14:45 - 16:00 Heiko Röglin: Smoothed Analysis of Algorithms (Part 2)
16:00 - 16:30 Tea and cake
16:30 - 17:45 Problem session

Friday, September 25

9:15 - 10:30 James Lee: Semi-Definite Extended Formulations and Sums of Squares (Part 3)
10:30 - 11:00 Coffee break
11:00 - 12:15 Heiko Röglin: Smoothed Analysis of Algorithms (Part 3)
12:15 - 14:45 Lunch break
14:45 - 16:00 Zoltán Szigeti: Connectivity Problems (Part 3)
16:00 - 16:30 Tea and cake

Abstracts

Eden Chlamtac: Lift-and-Project Methods and Integrality Gaps

Linear programming (LP) and semidefinite programming (SDP) are powerful convex optimization tools which have become ubiquitous in the field of approximation algorithms over the last few decades. Given a combinatorial optimization problem, a standard approach to obtaining an approximation algorithm is as follows: formulate the problem as an integer program (IP), relax this formulation to an LP or SDP, solve the relaxation, and then "round" the solution back to an integer solution.

This approach is limited by the integrality gap of the relaxation: the worst-case ratio between the integer optimum and the optimum of the relaxation. One way of circumventing this gap is to strengthen the relaxation via lift-and-project methods, also known as hierarchies of convex programs, which give a systematic way of iteratively strengthening any relaxation with additional variables and constraints so that the integrality gap gradually decreases. While in some cases these methods do not decrease the integrality gap compared to the basic relaxation, in several instances they have been used to improve known approximations for NP-hard problems, or to match known combinatorial algorithms that had no convex-programming-based equivalents. We will examine a few such methods, as well as various problems to which they have been successfully applied.
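As a toy illustration of this relax-and-round pipeline (my own sketch, not part of the lectures), the following Python snippet solves the standard LP relaxation of Vertex Cover with SciPy and rounds by picking every vertex whose fractional value is at least 1/2, which yields the classical factor-2 guarantee. On the complete graph K_n the LP optimum is n/2 while the integral optimum is n-1, exhibiting an integrality gap approaching 2.

    import numpy as np
    from scipy.optimize import linprog

    def vertex_cover_lp_round(n, edges):
        # LP relaxation: minimize sum_v x_v
        # subject to x_u + x_v >= 1 for every edge {u, v}, and 0 <= x_v <= 1.
        c = np.ones(n)
        A_ub = np.zeros((len(edges), n))
        for i, (u, v) in enumerate(edges):
            A_ub[i, u] = A_ub[i, v] = -1.0   # -(x_u + x_v) <= -1
        b_ub = -np.ones(len(edges))
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
        # Threshold rounding: every edge has an endpoint with x >= 1/2,
        # so the rounded set is a cover of size at most 2 * (LP optimum).
        cover = [v for v in range(n) if res.x[v] >= 0.5]
        return res.fun, cover

    # Complete graph K_5: LP optimum 2.5 (all x_v = 1/2), integral optimum 4.
    edges = [(u, v) for u in range(5) for v in range(u + 1, 5)]
    lp_opt, cover = vertex_cover_lp_round(5, edges)
    print(lp_opt, cover)   # 2.5 and a cover containing all 5 vertices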

http://www.math.washington.edu/~rothvoss/lecturenotes/lasserresurvey.pdf

http://www.cs.princeton.edu/~chlamtac/sdpchapter.pdf

James Lee: Semi-Definite Extended Formulations and Sums of Squares

Video recording: Lecture 1, Lecture 2, Lecture 3

In the setting of combinatorial optimization, much of mathematical programming is concerned with finding tractable convex descriptions (both exact and approximate) of combinatorially defined polytopes. This endeavor has been particularly fruitful for efficiently finding approximate solutions to NP-hard problems. Remarkably, there is a way to capture the vast majority of this effort in a well-defined computational model: extended formulations (EFs). In trying to understand the power of EFs, we will encounter connections with communication complexity, proof complexity, Fourier analysis, and eventually quantum information.

Lecture 1:  Extended formulations and lifts of polytopes

Can a complicated polytope be the linear projection of a much simpler polytope in a higher dimension?  We will look at some examples, and see Yannakakis' reframing of the question in terms of the non-negative rank of a matrix.  We will then see how the Sherali-Adams lift-and-project hierarchy has an elegant interpretation in this model.  This connection will play an important role in the sequel.
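As a small concrete companion (my own, not from the lecture): by Yannakakis' theorem, the minimum size of an LP extended formulation of a polytope equals the nonnegative rank of its slack matrix S, whose entry S[i, j] = b_i - <a_i, v_j> records the slack of vertex v_j in the facet inequality <a_i, x> <= b_i. The sketch below builds the slack matrix of the square in NumPy; computing its nonnegative rank is, in general, the hard part.

    import numpy as np

    # Facets of the square [-1, 1]^2, written as <a_i, x> <= b_i.
    A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    b = np.array([1, 1, 1, 1], dtype=float)

    # Vertices of the square, one column per vertex.
    V = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float).T

    # Slack matrix: S[i, j] = b_i - <a_i, v_j>, entrywise nonnegative.
    S = b[:, None] - A @ V
    print(S)
    # Yannakakis: the smallest lift of the square has size rank_+(S);
    # here S admits a small nonnegative factorization, matching the fact
    # that the square needs no nontrivial extended formulation.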

Lecture 2:  Entropy maximization and approximation by juntas

We will prove a structure theorem, showing that every small lift of the CUT polytope must have a "Sherali-Adams lift" hiding inside it.  The main technical component will be a way of approximating every high-entropy distribution on n bits by a distribution that only depends heavily on a few coordinates.  This will be accomplished by a "boosting" style learning algorithm.

Lecture 3:  PSD rank and sums-of-squares degree

Finally, we will discuss the analogous theory for semi-definite programs, in which non-negative rank is replaced by PSD rank, and Sherali-Adams is replaced by the sum-of-squares hierarchy. The general strategy from the preceding lecture will carry over once we have the right dictionary: distribution ⇒ quantum state; junta ⇒ low-degree polynomial; Shannon ⇒ von Neumann.
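As a minimal numerical illustration (mine, not from the lecture): a polynomial p is a sum of squares exactly when p(x) = z(x)^T Q z(x) for some PSD Gram matrix Q over a monomial basis z(x), and the degree needed for such a certificate is what "sums-of-squares degree" measures. The sketch below verifies such a certificate for p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2.

    import numpy as np

    # Monomial basis z(x) = (1, x, x^2); a PSD Gram matrix Q with
    # p(x) = z(x)^T Q z(x) certifies that p is a sum of squares.
    Q = np.array([[1.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 0.0, 1.0]])

    # Q is PSD (eigenvalues 0, 0, 2), so p = (x^2 + 1)^2 is the square.
    assert np.all(np.linalg.eigvalsh(Q) >= -1e-9)

    # Spot-check the identity p(x) = x^4 + 2x^2 + 1 at random points.
    for x in np.random.randn(5):
        z = np.array([1.0, x, x ** 2])
        assert abs(z @ Q @ z - (x ** 4 + 2 * x ** 2 + 1)) < 1e-9
    print("SOS certificate verified")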

Kazuo Murota: Discrete Convex Analysis

Video recording: Lecture 1, Lecture 2, Lecture 3

Discrete convex analysis is a general theoretical framework for solvable discrete optimization problems, obtained by combining ideas from continuous optimization and combinatorial optimization. The framework of convex analysis is adapted to discrete settings, and the mathematical results of matroid/submodular function theory are generalized. Symbolically: "Discrete Convex Analysis = Convex Analysis + Matroid Theory". L-convex functions and M-convex functions play the primary roles ("L" standing for "Lattice" and "M" for "Matroid"); these concepts generalize, respectively, submodular set functions and matroid basis families.

This lecture consists of three parts. In Part 1 (Convex Analysis View on Combinatorial Optimization), we first enumerate the major aspects of discrete convex optimization in terms of univariate discrete functions, and then discuss these aspects for the minimum spanning tree problem and the weighted bipartite matching problem. In Part 2 (Properties of L-convex and M-convex Functions), we introduce L-convex and M-convex functions, give examples from combinatorial optimization, and explain their fundamental properties, with particular emphasis on the relationship to submodularity. In Part 3 (Conjugacy and Duality Theorems and Their Implications), we show the conjugacy between L-convexity and M-convexity as well as duality theorems (separation theorems and a Fenchel-type min-max theorem), and explain how these general theorems of discrete convex analysis encompass major results in submodular optimization, such as Edmonds' intersection theorem, Frank's discrete separation theorem, and Frank's weight splitting theorem.
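To make Part 1's univariate setting concrete (a toy example of my own, not from the lecture): a function f on the integers is discretely convex iff f(x-1) + f(x+1) >= 2 f(x) for all x, and for such functions a local minimum, one with no better neighbor, is already a global minimum. This is the template that L- and M-convexity generalize to higher dimensions. A minimal sketch:

    def is_discretely_convex(f, lo, hi):
        # f(x-1) + f(x+1) >= 2 f(x) on the interior of [lo, hi].
        return all(f(x - 1) + f(x + 1) >= 2 * f(x) for x in range(lo + 1, hi))

    def local_search_minimize(f, x, lo, hi):
        # Greedy descent to a local minimum; for a discretely convex f
        # this local minimum is guaranteed to be global.
        while True:
            best = min((y for y in (x - 1, x + 1) if lo <= y <= hi),
                       key=f, default=x)
            if f(best) >= f(x):
                return x
            x = best

    f = lambda x: (x - 3) ** 2 + abs(x)        # discretely convex on Z
    assert is_discretely_convex(f, -10, 10)
    print(local_search_minimize(f, -10, -10, 10))   # a global minimizer (2)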

Heiko Röglin: Smoothed Analysis of Algorithms

Video recording: Lecture 1, Lecture 2, Lecture 3

The complexity of many classical optimization problems and algorithms seems well understood. However, theoretical results often contradict observations made in practice. Many NP-hard optimization problems, like the knapsack problem, can be solved efficiently in practice, and for many problems, like linear programming, algorithms with exponential worst-case running time outperform known polynomial-time algorithms. The reason for this discrepancy is the pessimistic worst-case perspective adopted by theoreticians, in which the performance of an algorithm is measured solely by its behavior on the worst possible input. This does not take into account that worst-case inputs are often rather contrived and occur only rarely in practical applications.

In order to provide a more realistic measure that can explain the practical performance of algorithms, Spielman and Teng introduced the framework of smoothed analysis in 2001, in which the performance of an algorithm is measured on inputs that are subject to random noise. We will present this framework and results on the smoothed analysis of the simplex method, local search algorithms, and binary optimization problems.

http://www.roeglin.org/teaching/Skripte/ProbabilisticAnalysis.pdf
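As a hands-on illustration (my own construction, in the spirit of Beier and Vöcking's smoothed analysis of the knapsack problem): with adversarial profits but randomly perturbed weights, the number of Pareto-optimal solutions, and hence the running time of the Nemhauser-Ullmann algorithm, stays small with high probability, far below the worst-case 2^n. The sketch below enumerates the Pareto set item by item.

    import random

    def pareto_knapsack(profits, weights):
        # Nemhauser-Ullmann: maintain the Pareto set of (weight, profit)
        # pairs over all subsets of the first i items; its size governs
        # the algorithm's running time.
        pareto = [(0.0, 0.0)]
        for p, w in zip(profits, weights):
            merged = sorted(pareto + [(pw + w, pp + p) for pw, pp in pareto],
                            key=lambda t: (t[0], -t[1]))
            pareto, best = [], float("-inf")
            for pw, pp in merged:
                # Keep a point only if no lighter (or equal-weight) subset
                # achieves at least the same profit.
                if pp > best:
                    pareto.append((pw, pp))
                    best = pp
        return pareto

    n = 20
    profits = [2 ** (i % 5) for i in range(n)]              # adversarial profits
    weights = [random.uniform(0.5, 1.5) for _ in range(n)]  # smoothed weights
    print(len(pareto_knapsack(profits, weights)),
          "Pareto-optimal subsets out of", 2 ** n)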

Zoltán Szigeti: Connectivity Problems

Video recording: Lecture 1, Lecture 2, Lecture 3

We will present two topics concerning the connectivity of graphs: orientation and augmentation.

The basic problems are the following:

  • Does a given undirected graph have a k-arc-connected orientation?
  • Can a given undirected graph be made k-edge-connected by adding L edges?

The two main tools to be applied in these theories will be explained:

  • the uncrossing technique and
  • the splitting off operation.

We will consider Nash-Williams’ orientation theorems on global and local arc-connectivity. A proof will be provided for the global case, and extensions will be explained for the local case.

The general method of Frank for solving edge-connectivity augmentation problems will be presented. We will apply it to obtain a min-max theorem on the global edge-connectivity augmentation problem. Generalizations for local edge-connectivity and for hypergraphs will be explained.
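As a quick companion to the orientation problem (my own sketch, using networkx, which the abstract does not reference): Nash-Williams' weak orientation theorem says a graph has a k-arc-connected orientation if and only if it is 2k-edge-connected, so the largest achievable k can be read off directly from the edge connectivity.

    import networkx as nx

    def max_orientation_connectivity(G):
        # Nash-Williams (global case): G has a k-arc-connected orientation
        # if and only if G is 2k-edge-connected, so the best achievable k
        # is floor(edge_connectivity(G) / 2).
        return nx.edge_connectivity(G) // 2

    G = nx.complete_graph(5)                # K_5 is 4-edge-connected
    print(max_orientation_connectivity(G))  # 2: K_5 has a 2-arc-connected orientation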

szigeti-supp1.pdf

szigeti-supp2.pdf
