Schedule of the Relaxation Workshop

Monday, November 16

10:15 - 10:50 Registration & Welcome coffee
10:50 - 11:00 Opening remarks
11:00 - 11:30 Mohit Singh: Maximizing Determinants under Partition Constraints
11:30 - 12:00 Rico Zenklusen: Online Contention Resolution Schemes
12:00 - 15:00 Lunch break and discussions
15:00 - 15:30 Ramamoorthi Ravi: Designing Overlapping Networks for Publish-Subscribe Systems
15:30 - 16:00 Christos Kalaitzis: Approximating the Maximum Budgeted Allocation Problem using the Configuration-LP
16:00 - 16:30 Tea and cake
16:30 - 17:00 Daniel Dadush: On the Geometry of Linear Programming
17:00 - 17:30 Parinya Chalermsook: Graph products, hardness of approximation, and beyond
20:00 - Concert at the Arithmeum (Lennéstrasse 2): Gruppo Ocarinistico Budriese

Friday, November 20

9:30 - 10:00 Nisheeth Vishnoi: Natural Algorithms for Flow Problems
10:00 - 10:30 Viswanath Nagarajan: Approximation-Friendly Discrepancy Rounding
10:30 - 11:00 Coffee break
11:00 - 12:00 Aravind Srinivasan: Approximating integer-programming problems by partial resampling
12:00 - 14:00 Lunch break
14:00 - 16:00 Discussion session
16:00 - 16:30 Tea and cake – end of workshop



Haxell's condition is a natural hypergraph analogue of Hall's condition, the well-known necessary and sufficient condition for a bipartite graph to admit a perfect matching. In particular, when Haxell's condition holds, it guarantees the existence of a perfect matching in a bipartite hypergraph. Unlike in graphs, however, no polynomial-time algorithm is known for finding the hypergraph perfect matching that is guaranteed to exist when Haxell's condition is satisfied.

We prove the existence of an efficient algorithm to find perfect matchings in bipartite hypergraphs whenever a stronger version of Haxell's condition holds. Our algorithm can be seen as a generalization of the classical Hungarian algorithm for finding perfect matchings in bipartite graphs. The techniques we use to achieve this result could be of use more generally in other combinatorial problems on hypergraphs where disjointness structure is crucial, e.g. Set Packing.
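For concreteness, the classical bipartite-graph base case that the algorithm above generalizes can be sketched as follows: the standard augmenting-path (Hungarian-style) method, which finds a perfect matching exactly when Hall's condition holds. This is an illustrative sketch of the textbook algorithm, not the hypergraph algorithm of the talk; the function name and input format are our own.

```python
def bipartite_perfect_matching(adj, n_left, n_right):
    """Augmenting-path search for a perfect matching in a bipartite graph.

    adj[u] lists the right-vertices adjacent to left-vertex u.
    Returns a dict (left -> right), or None if no perfect matching exists
    (i.e. Hall's condition fails for some set of left vertices).
    """
    match_right = {}  # right vertex -> the left vertex it is matched to

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in range(n_left):
        if not try_augment(u, set()):
            return None
    if n_left != n_right:
        return None
    return {u: v for v, u in match_right.items()}
```

Each successful call to `try_augment` flips one augmenting path and grows the matching by one edge, which is exactly the invariant the Hungarian method maintains.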


Sylvia Boyd: Toward a 6/5 bound for the minimum cost 2-edge connected subgraph problem

Given a complete graph G = (V,E) with non-negative edge costs c, the problem 2EC is that of finding a 2-edge connected spanning multi-subgraph of G of minimum cost. The linear programming relaxation 2ECLP gives a lower bound for 2EC, and it has been conjectured that its integrality gap is 6/5. In this paper, we explore the idea of using the structure of solutions for 2ECLP and the concept of convex combination to obtain improved approximation algorithms for 2EC. We focus our efforts on a family J of half-integer solutions that appear to give the largest integrality gap for 2ECLP. We successfully show that for any x* in J, there exists a solution for 2EC of cost at most (6/5)cx*, improving upon the previous best value of 4/3 and proving that the conjecture holds true for this family.

Joint work with Philippe Legault.


Graph products are an old topic in graph theory. In TCS, they have been used since the 1980s as a standard way to boost hardness of approximation. In this talk, we report two new applications of graph products:

1) We show that lexicographic products can be used to "fix" candidate reductions that do not initially work. As applications, we settle the approximability of more than 15 open problems related to graph packing, such as edge-disjoint paths on DAGs, k-cycle packing for large k, bipartite induced matching, poset dimension, boxicity, and bipartite dimension.

The structure of lexicographic products can further be exploited to prove strong PAC-learning impossibility results. We illustrate this by showing that Deterministic Finite Automata are not PAC-learnable unless NP=RP, resolving a question posed by Pitt and Warmuth in 1989.

2) We show that randomized disjunctive products can be used to derive a nearly tight time-approximation tradeoff for approximating Maximum Independent Set and Graph Coloring: r-approximating these problems requires running time 2^{n^{1-ε}/r^{1+ε}}. This is one of the key ingredients we used in settling the hardness of some classical pricing problems.

Our techniques build on, strengthen, and unify several known tools in the hardness of approximation literature.

The talk is mostly based on joint work with Bundit Laekhanukit and Danupon Nanongkai that appeared in FOCS 2013, FOCS 2014, and unpublished manuscripts.


In 1979, Khachiyan used a "numerical trick" to show that linear programming (LP) is polynomially solvable using the ellipsoid method. Since that time, a huge variety of powerful algorithms for LP have been developed, from rescaling methods to interior point methods, with faster and faster convergence rates. Despite the steady progress in running times, a deeper understanding of the geometry behind the "numerical trickery" has proved elusive.

In this work, we provide a new fine-grained view of the relevant geometry for solving linear programs exactly. More precisely, we define a combination of three geometric potentials (which can be bounded by the size of the numbers) that together control the complexity of finding a point in the relative interior of any linear system, together with a certificate of this fact. For this purpose, we provide a novel polynomial-time "rescaled" interior point method for LP. Conversely, we show that given any such solution together with the certificate, one can define a simple scaling of the rows of the linear system such that all the aforementioned potentials become strongly polynomial.


Samuel Fiorini: No Small Linear Program Approximates Vertex Cover within a Factor 2-ε

The vertex cover problem is one of the most important and intensively studied combinatorial optimization problems. Khot and Regev (2003) proved that the problem is NP-hard to approximate within a factor 2-ε, assuming the Unique Games Conjecture (UGC). This is tight because the problem has an easy 2-approximation algorithm. Without resorting to the UGC, the best inapproximability result for the problem is due to Dinur and Safra (2002): vertex cover is NP-hard to approximate within a factor 1.3606.

We prove the following unconditional result about linear programming (LP) relaxations of the problem: every LP relaxation that approximates vertex cover within a factor 2-ε has super-polynomially many inequalities. As a direct consequence of our methods, we also establish that LP relaxations (as well as SDP relaxations) that approximate the independent set problem within any constant factor have super-polynomial size.

Joint work with Abbas Bazzi, Sebastian Pokutta and Ola Svensson.


In this talk we consider polynomial matroid optimisation problems with some non-linear monomials in the objective function. The monomials are linearised and we study the corresponding polytopes. Extending results of Edmonds we present complete descriptions for the linearised polytopes for two different kinds of objective functions. The problems of the first kind contain a set of nested monomials and the ones of the second kind a set of monomials fulfilling certain up- and downward completeness conditions. Indeed, apart from the standard linearisation in these cases, one only needs appropriately strengthened rank inequalities. The separation problems of these inequalities reduce to submodular function minimization problems. These results for specially structured problems lead to a new hierarchy for solving general polynomial matroid optimisation problems.

This is joint work with Frank Fischer and Thomas McCormick.


Michel Goemans: Rounding linear programs: A few recent examples

A classical approach for taming hard combinatorial optimization problems is to solve a linear programming relaxation and round the resulting fractional solution. In this lecture, we will review several rounding techniques and illustrate them. Among other results, we will discuss recent advances based on discrepancy and randomized iterative rounding for Steiner trees.


We show that for a random d-dimensional 0/1-polytope the smallest size of any semidefinite extended formulation is exponential in d by building upon nothing else than a simple well-known property of maximum volume inscribed ellipsoids of convex bodies. In particular, the proof does not rely on factorizations over the semidefinite cone and thus avoids involved procedures of balancing them as required by the original proof given by Briet, Dadush, and Pokutta. We hope that revealing the geometry behind the phenomenon opens doors for further extensions.

Joint work with Gennadiy Averkov and Stefan Weltge.


The Maximum Budgeted Allocation Problem is the problem of assigning indivisible items to agents with budget constraints in order to maximize the total revenue. While the natural Assignment-LP for this problem is well understood, and has an integrality gap of 3/4, the same is not true for the Configuration-LP, for which the known upper and lower bounds on the integrality gap are far apart. In this talk, I will explain why the Configuration-LP is strictly stronger than the Assignment-LP for this problem, presenting a rounding algorithm which also provides the best known approximation ratio for the problem.


Minimizing a polynomial f over a region K defined by polynomial inequalities is a hard problem, for which various hierarchies of relaxations have been proposed in the recent years, in particular by Lasserre and Parrilo. These hierarchies are based on using sums of squares representations of positive polynomials and the dual theory of moments. Roughly speaking, one can get lower bounds by finding suitable sums of squares representations of f and one can find upper bounds by using probability measures on the set K with suitable density function (like being a sum of squares). In this lecture we focus on the convergence analysis of some of these hierarchies. While general results exist, however with weak convergence, we will discuss several instances where sharper convergence results can be shown. We will consider, in particular, polynomial optimization over the simplex and the hypercube, and a measure based hierarchy of upper bounds for optimization over a compact body.


Semi-definite programming is a powerful model of computation, but under widely accepted complexity-theoretic assumptions, it should not be able to solve NP-hard problems. Until recently, it was unknown whether small SDPs could exactly capture, for instance, the traveling salesman polytopes.

We introduce a method for proving lower bounds on the size of SDP relaxations for combinatorial problems. In particular, we show that the cut, TSP, and stable set polytopes on n-vertex graphs cannot be characterized by spectrahedra of sub-exponential dimension. A spectrahedron is the feasible region of an SDP: The intersection of the positive semi-definite cone and an affine subspace. This yields the first super-polynomial lower bound on the semi-definite extension complexity of any explicit family of polytopes.

Our results follow from a general technique for proving lower bounds on the positive semi-definite rank of a non-negative matrix. To this end, we establish a close connection between arbitrary SDPs and those arising from the sum-of-squares SDP hierarchy.

This is based on joint work with Prasad Raghavendra and David Steurer.


With W. Morris in 1992, I introduced the idea of comparing polytopes relevant to combinatorial optimization via calculation of n-dimensional volumes. I will review some of that work (related to fixed-charge problems) and describe some new work, with E. Speakman, relevant to the spatial branch-and-bound approach to global optimization. In this new work, we calculate exact expressions for 4-dimensional volumes of natural parametric families of polytopes relevant to different convex relaxations of trilinear monomials. As a consequence, we have practical guidance: (i) for tuning an aspect of spatial branch-and-bound implementations, (ii) at the modeling level.


We give an overview of the following lower bound results for the Lasserre hierarchy:

  • The Lasserre hierarchy is known to converge to the 0/1 polytope in n levels. We characterize the set of 0/1 integer linear problems and unconstrained 0/1 polynomial optimization problems that can still have an integrality gap at level n-1. These problems are the hardest for the Lasserre hierarchy in this sense.
  • For a classical scheduling problem that can be solved in O(n log n) time, we prove that the Lasserre hierarchy has an unbounded integrality gap at level Ω(√n).
  • When the initial problem formulation exhibits a high degree of symmetry, we show how to reduce the study of positive semidefiniteness to the analysis of "well-behaved" univariate polynomial inequalities. As an example, we give a short elementary proof of the Grigoriev/Laurent lower bound for the integer cut polytope of the complete graph.

Joint work with A. Kurpisz and S. Leppänen.


Matthias Mnich: Improved Approximation Algorithm for Minimum Feedback Vertex Sets in Tournaments

We consider the minimum feedback vertex set problem in tournaments, which finds applications in ranking scenarios. This problem is NP-hard to solve exactly, and Unique Games-hard to approximate by a factor better than two. We present an approximation algorithm for this problem that improves on the previous best known ratio of 5/2, given by Cai et al. (FOCS 1998 / SICOMP 2001).

Joint work with László A. Végh.


We consider the general problem of rounding a fractional vector to an integral vector while (approximately) satisfying a number of linear constraints. Randomized rounding and discrepancy-based rounding are two of the strongest rounding methods known. However, these algorithms are very different, and the violation bounds they achieve are incomparable. Randomized rounding leads to good multiplicative violation, whereas discrepancy-based rounding leads to much better additive violation. Our first contribution is a new discrepancy-based rounding algorithm that simultaneously yields strong additive as well as multiplicative violation bounds. Our second contribution is an extension of this rounding algorithm to matroid polytopes.
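The first of the two baseline methods above is easy to sketch: independently round each fractional coordinate x_i to 1 with probability x_i, so that each linear constraint holds in expectation and (by Chernoff-type concentration) with small multiplicative violation. This is a minimal illustration of plain randomized rounding, not the talk's new algorithm; the function names are our own.

```python
import random

def randomized_round(x, rng):
    """Independently round each coordinate x_i in [0,1] to 1 w.p. x_i.

    Each constraint a.x is preserved in expectation; concentration bounds
    then give a multiplicative violation guarantee.
    """
    return [1 if rng.random() < xi else 0 for xi in x]

def constraint_violation(A, x, x_rounded):
    """Per-row additive violation |a.x_rounded - a.x| for each row a of A."""
    def dot(row, v):
        return sum(a * vi for a, vi in zip(row, v))
    return [abs(dot(row, x_rounded) - dot(row, x)) for row in A]
```

For a dense row with large coefficients the additive violation of this scheme can be large, which is exactly the regime where discrepancy-based rounding does better.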

This is joint work with Nikhil Bansal.


Alantha Newman: The Alternating Stock Size Problem and the Gasoline Puzzle

Given a set S of integers whose sum is zero, consider the problem of finding a permutation of these integers such that:

  • all prefix sums of the ordering are non-negative, and
  • the maximum value of a prefix sum is minimized.

Kellerer et al. refer to this problem as the "Stock Size Problem" and showed that it can be approximated to within 3/2. They also showed that an approximation ratio of 2 can be achieved via several simple algorithms.
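One greedy of this simple flavor (a sketch in the spirit of, but not necessarily identical to, the algorithms of Kellerer et al.) places a negative number whenever the prefix sum stays non-negative, and a positive number otherwise. Whenever a positive is forced, the prefix is below the largest remaining |negative|, so the maximum prefix is at most max positive + max |negative| ≤ 2·OPT, since OPT must exceed both terms.

```python
def greedy_stock_ordering(nums):
    """2-approximation sketch for the Stock Size Problem (sum of nums is 0).

    Place the most negative remaining number whenever the prefix sum stays
    non-negative; otherwise place the smallest remaining positive.
    """
    pos = sorted((x for x in nums if x > 0), reverse=True)   # pop() -> smallest
    neg = sorted((x for x in nums if x <= 0), reverse=True)  # pop() -> most negative
    order, s = [], 0
    while pos or neg:
        if neg and s + neg[-1] >= 0:
            s += neg[-1]
            order.append(neg.pop())
        else:
            s += pos[-1]
            order.append(pos.pop())
    return order

def max_prefix(order):
    """Largest prefix sum of an ordering (the 'stock size')."""
    s, best = 0, 0
    for x in order:
        s += x
        best = max(best, s)
    return best
```

Note the else-branch never runs out of positives: if only negatives remain, they sum to -s, so the most negative one is at least -s and is therefore placeable.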

We consider a related problem, which we call the "Alternating Stock Size Problem" in which the number of positive and negative integers in the input set S are equal.  The problem is the same as above, but we are additionally required to alternate the positive and negative numbers in the output ordering. This problem also has several simple 2-approximations. We show that it can be approximated to within 1.79.

Then we show that this problem is closely related to an optimization version of the Gasoline Puzzle due to Lovász, in which we want to minimize the size of the gas tank necessary to go around the track. We give approximation algorithms for this problem as well, based on rounding LP relaxations whose feasible solutions are convex combinations of permutation matrices.

This is based on joint works with Johanna Seif (ENS Lyon) and Heiko Röglin (Universität Bonn).


Sebastian Pokutta: Affine reductions in Extended Formulations

Linear and semidefinite programming are two core optimization paradigms with many important applications in mathematics, engineering, and business. However, the expressive power of these modeling paradigms is only partially understood so far and extended formulations are a powerful and natural tool to analyze the possibilities and limitations of linear and semidefinite programming formulations. More precisely, extended formulations are concerned with studying the optimal representation of a combinatorial optimization problem in terms of LPs, SDPs, or alternative conic programming paradigms. An extended formulation is a higher dimensional description of the problem utilizing auxiliary variables.

In this talk I will provide an introduction to extended formulations and lay out a reduction framework for establishing upper and lower bounds on the size of exact and approximate LP and SDP formulations. This framework allows for surprisingly simple and convenient analyses without relying on any heavy machinery, making extended formulations accessible without requiring in-depth prior knowledge of the results establishing the base hardness. I will conclude with various open problems, in both the exact and approximate as well as the linear and semidefinite settings.

Joint work with Gábor Braun and Daniel Zink.


From the publish-subscribe systems of the early days of the Internet to the recent emergence of Web 3.0 and IoT (Internet of Things), new problems arise in the design of networks centered at producers and consumers of constantly evolving information. In a typical problem, each terminal is a source or sink of information and builds a physical network in the form of a tree or an overlay network in the form of a star rooted at itself. Every pair of pub-sub terminals that need to be coordinated (e.g. the source and sink of an important piece of control information) define an edge in a bipartite demand graph; the solution must ensure that the corresponding networks rooted at the endpoints of each demand edge overlap at some node. This simple overlap constraint, and the requirement that each network is a tree or a star, leads to a variety of new questions on the design of overlapping networks. In this work, for the general demand case of the problem, we show that a natural LP formulation has a non-constant integrality gap; on the positive side, we present a logarithmic approximation for the general demand case. When the demand graph is complete, however, we design approximation algorithms with small constant performance ratios, irrespective of whether the pub networks and sub networks are required to be trees or stars.

Joint work with Jennifer Iglesias (CMU), Rajmohan Rajaraman & Ravi Sundaram (Northeastern U).


David Shmoys: Approximation algorithms for a bike-sharing rebalancing problem

New York launched the largest bike-sharing system in North America in May 2013. We have worked with Citibike, using analytics and optimization to change how they manage the system. Huge rush-hour usage imbalances the system; we answer both of the questions: where should bikes be at the start of a rush hour, and how can we mitigate the imbalances that develop? Although we focus in this talk on the latter question, we will briefly touch on an approach based on continuous-time Markov chains combined with integer programming models for the former. For the latter, we present a 3-approximation algorithm used to target limited available rebalancing resources during rush-hour periods. The goal is to ensure that users are never too far from a station that will be rebalanced when looking for a bike or a spot to return one. We formulate this as a variant of the k-center problem and provide a linear programming-based algorithm with a performance guarantee of 3.
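For readers unfamiliar with the k-center flavor invoked above: the classical (unconstrained) k-center problem already admits a simple 2-approximation via farthest-point traversal, due to Gonzalez. The sketch below illustrates that baseline only; the talk's 3-approximation handles a rebalancing-specific variant and is not reproduced here.

```python
def gonzalez_k_center(points, k, dist):
    """Farthest-point traversal: classical 2-approximation for k-center.

    Repeatedly add the point farthest from the current set of centers.
    """
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

def covering_radius(points, centers, dist):
    """Max distance from any point to its nearest center."""
    return max(min(dist(p, c) for c in centers) for p in points)
```

On points {0, 1, 10, 11} on a line with k = 2, the traversal picks one point from each cluster and achieves covering radius 1.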

This is joint work with Shane Henderson, Eoin O'Mahoney, and Ola Svensson.


Mohit Singh: Maximizing Determinants under Partition Constraints

Given a positive semi-definite matrix L whose columns and rows are indexed by a set U, and a partition matroid M=(U,I), we study the problem of selecting a basis B of M such that the determinant of the submatrix of L restricted to the rows and columns indexed by B is maximized. This problem appears in diverse areas including determinantal point processes in machine learning, experimental design, geographical placement problems, discrepancy theory and convex geometry. Our main result is an e^r-approximation for the problem, where r denotes the rank of the partition matroid M.

We first give a geometric convex program for the problem whose solution is then rounded to an integral solution. To analyze the rounding algorithm, we relate the solution of our algorithm, as well as the objective value of the relaxation, to a certain stable polynomial. To prove the approximation guarantee, we utilize a general inequality about stable polynomials proved by Gurvits in the context of estimating the permanent of a doubly stochastic matrix. Joint work with Sasho Nikolov.
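On tiny instances the objective itself is easy to evaluate by brute force, which makes the problem statement concrete. The sketch below assumes a rank-r partition matroid whose bases pick exactly one index per part (an assumption made for simplicity; general partition matroids allow a fixed capacity per part), and enumerates all bases, which of course does not scale.

```python
from itertools import product

def det(M):
    """Determinant by Laplace expansion along the first row (tiny matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def best_basis(L, parts):
    """Brute force: choose one index per part to maximize det of the
    principal submatrix of L indexed by the chosen basis B."""
    best_val, best_B = float("-inf"), None
    for B in product(*parts):
        sub = [[L[i][j] for j in B] for i in B]
        v = det(sub)
        if v > best_val:
            best_val, best_B = v, B
    return best_B, best_val
```

The number of bases grows multiplicatively in the part sizes, which is why the talk's convex-programming relaxation and rounding are needed for anything non-trivial.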


Partial resampling is a variant of the Moser-Tardos algorithm for the Lovasz Local Lemma; it was developed by Harris and the speaker (FOCS 2013). We present two families of applications of this framework for approximating column-sparse integer programs, parametrized by the maximum L1 norm of any column: for "assignment packing" problems (e.g., in packet scheduling) and for covering integer programs. The first is joint work with David Harris (FOCS 2013), and the second is joint work with Antares Chen and David Harris (SODA 2016).


Nisheeth Vishnoi: Natural Algorithms for Flow Problems

In the last few years, there has been significant interest in the computational abilities of Physarum polycephalum (a slime mold). This interest stems from a remarkable experiment which showed that this organism can compute shortest paths in a maze. Subsequently, the workings of Physarum were mathematically modeled as a dynamical system, and algorithms inspired by this model were proposed to solve several graph problems. If these algorithms could be shown to work, it would raise the tantalizing possibility that nature, via evolution, has developed algorithms that can efficiently solve some of the most complex graph problems. In this talk I will show that Physarum-based algorithms can efficiently solve flow problems on undirected and directed graphs.

This is joint work with Damian Straszak.


We study the reformulation of integer linear programs by means of a mixed integer linear program with fewer integer variables. Such reformulations can be solved efficiently with mixed integer linear programming techniques.

We exhibit a variety of examples that demonstrate how integer programs can be reformulated using many fewer integer variables. To this end, we introduce a generalization of total unimodularity, called the affine TU-dimension of a matrix, and study related theory and algorithms for determining it. We also present bounds on the number of integer variables needed to represent certain integer hulls. Finally, we prove that any mixed integer extended formulation of the matching polytope requires nearly linearly many integrality constraints.

Joint work with Jörg Bader, Robert Hildebrand, Rico Zenklusen.


Dantzig-Wolfe reformulation of an integer program convexifies a subset of the constraints, which yields an extended formulation with a potentially stronger linear programming (LP) relaxation than the original formulation. This paper is part of an endeavor to understand the strength of such a reformulation in general. We investigate the strength of Dantzig-Wolfe reformulations of the edge formulation for the maximum weighted stable set problem. Since every constraint in the edge formulation corresponds to an edge of the underlying graph, a Dantzig-Wolfe reformulation for the edge formulation consists of choosing a subgraph and convexifying all constraints corresponding to edges of this subgraph. We characterize Dantzig-Wolfe reformulations not yielding a stronger LP relaxation (than the edge formulation) as reformulations where this subgraph is bipartite. Furthermore, we analyze the structure of facets of the stable set polytope and present a characterization of Dantzig-Wolfe reformulations with the strongest possible LP relaxation as reformulations where the chosen subgraph contains all odd holes (and 3-cliques) of the initial graph.

Joint work with Marco Lübbecke.


Rico Zenklusen: Online Contention Resolution Schemes

We introduce a new rounding technique for online optimization problems, which is related to contention resolution schemes, a technique initially introduced in the context of submodular function maximization. Our rounding technique, which we call online contention resolution schemes (OCRSs), is applicable to various online selection problems, including Bayesian online selection, oblivious posted pricing mechanisms, and stochastic probing models. Furthermore, OCRSs share many properties of offline contention resolution schemes. In particular, OCRSs for different constraint families can be combined to obtain an OCRS for their intersection. Moreover, using OCRSs, we can approximately maximize submodular functions in various online settings.

This is joint work with Moran Feldman and Ola Svensson.


Stanislav Zivny: The power of Sherali-Adams relaxations for general-valued CSPs

In this talk, we survey recent results on the power of LP relaxations (the basic LP relaxation and Sherali-Adams relaxations) in the context of valued constraint satisfaction problems (VCSP). We give precise characterisations of constraint languages for which these relaxations are exact, and also discuss computational complexity consequences.