Trimester Seminar

Venue: HIM lecture hall, Poppelsdorfer Allee 45
Organizers: András Frank, Satoru Iwata, Jochen Könemann, Jens Vygen

Wednesday, September 2

15:00 - 16:00 Welcome Meeting

Abstract: Welcome meeting with the participants and organizers of the Trimester Program.

Friday, September 4

14:30 - 15:30 Nicolai Hähnle: Multiplicative Weights and Time-Cost Tradeoff

Tuesday, September 15

15:00 - 16:00 Anupam Gupta: The Independent Set problem on Degree-d Graphs

Video recording

Abstract: The independent set problem on graphs with maximum degree d is known to be Ω(d/log^2 d) hard to approximate, assuming the unique games conjecture. However, the best approximation algorithm was worse by about an Ω(log d) factor. In a recent breakthrough, Bansal showed how to use few rounds of the SA+ hierarchy to estimate the size of the optimal independent set to within Õ(d/log^2 d), essentially closing the gap. Some questions remained: could we find such an IS? And did we really need the SA+ lifting step?

In this talk, we mostly focus on a result showing that the standard SDP, based on the Lovász theta function, gives an Õ(d/log^{3/2} d) approximation without using any lift/project steps. I will also mention how one can convert Bansal's algorithm for IS size estimation using SA+ into an approximation algorithm. Both results, just like Bansal's, are based on Ramsey-theoretic results, potentially of independent interest.

This is based on joint work with Nikhil Bansal (TU Eindhoven) and Guru Guruganesh (CMU).

Thursday, September 17

14:30 - 15:15 Matthias Poloczek: New Approximation Algorithms for MAX SAT: Simple, Fast, and Excellent in Practice

Video recording

Abstract: We present simple randomized and deterministic algorithms that obtain 3/4-approximations for the maximum satisfiability problem (MAX SAT) in linear time. In particular, their worst-case guarantees are comparable to those of Yannakakis' algorithm based on flows and LP (1994) or the LP-rounding algorithm of Goemans and Williamson (1994).
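
As a point of reference for these guarantees, the sketch below shows only the simplest randomized baseline: a uniformly random assignment satisfies a clause with k distinct literals with probability 1 - 2^(-k), hence at least half of all clauses in expectation. This is an illustration assumed for this write-up, not the linear-time 3/4-approximation algorithms of the talk.

    import random

    # Clauses in DIMACS-style convention (an assumption of this sketch):
    # a clause is a list of non-zero ints, +i for variable i, -i for its negation.
    def count_satisfied(clauses, assignment):
        """Number of clauses satisfied by a 0/1 assignment (dict: variable -> bool)."""
        return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)

    def uniform_random_assignment(n_vars, seed=0):
        """Each variable is set True/False independently with probability 1/2, so a
        clause with k distinct literals is satisfied with probability 1 - 2**(-k)."""
        rng = random.Random(seed)
        return {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    assignment = uniform_random_assignment(n_vars=3)
    print(count_satisfied(clauses, assignment), "of", len(clauses), "clauses satisfied")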

In order to examine their practical performance, we evaluate the algorithms on a comprehensive set of instances stemming from industrial applications: our deterministic algorithm satisfies more than 99% of the clauses on average and, in particular, performs better than a variety of state-of-the-art algorithms.

But it does not achieve the outstanding results of Spears' simulated annealing. Therefore, we propose a hybrid algorithm whose performance matches Spears' while being considerably faster.

Based on joint work with Georg Schnitger, David P. Williamson, and Anke van Zuylen.

15:15 - 16:00 Gyula Pap: Linear matroid matching in the oracle model

Video recording

Abstract: Linear matroid matching is understood as the special case of matroid matching in which the matroid is given by a matrix representation. However, for certain examples of linear matroids, the matrix representation is not given, and finding one appears to be a very challenging problem. A linear matroid may instead be given in an oracle model, in which the matroid is known through an independence oracle only and no matrix representation is available - examples arise in connectivity, rigidity, and count matroids. To solve the linear matroid matching problem in the oracle model, we need to get rid of the linear algebraic computation used in all previously known algorithms for linear matroid matching, including those of Lovász; Gabow and Stallmann; Orlin and Vande Vate; and Orlin. The approach in this talk has its roots in that of Gabow and Stallmann, and replaces the need for linear algebraic computation by repeated oracle calls along alternating paths. Thus one can recover a combinatorial structure similar to that in the algorithm of Gabow and Stallmann.
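
As a concrete illustration of such an oracle (my own example, not taken from the talk): the graphic matroid of a graph is linear, yet its independence oracle needs no matrix at all, only a cycle check. A minimal sketch:

    def is_independent(edges, n_vertices):
        """Independence oracle for the graphic matroid on n_vertices: a set of edges
        is independent iff it forms a forest (contains no cycle).
        Implemented with union-find; no matrix representation is used."""
        parent = list(range(n_vertices))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v

        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:                        # edge (u, v) would close a cycle
                return False
            parent[ru] = rv
        return True

    print(is_independent([(0, 1), (1, 2), (2, 3)], 4))   # a path: True
    print(is_independent([(0, 1), (1, 2), (2, 0)], 4))   # a triangle: False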

Tuesday, September 29

14:30 - 15:15 Stefan Hougardy: The Subtour LP and Algorithms for the TSP

Abstract: We present a combinatorial O(n^2 log n) algorithm for proving that certain edges of a TSP instance cannot occur in any optimum TSP tour. We also present results on the integrality ratio of the subtour LP and on the approximation ratio of heuristic algorithms for the TSP.
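
For reference, the subtour relaxation in question is the standard one (stated here for convenience; the bounds quoted below are well known and not results of the talk):

    \min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad
    x(\delta(v)) = 2 \;\; \forall v \in V, \qquad
    x(\delta(S)) \ge 2 \;\; \forall\, \emptyset \neq S \subsetneq V, \qquad
    0 \le x_e \le 1 \;\; \forall e \in E.

For the metric TSP its integrality ratio is known to lie between 4/3 and 3/2, and closing this gap is a long-standing open problem.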

15:15 - 16:00 Corinna Gottschalk: Additive Stabilizers for Unstable Graphs

Abstract: A graph G is called stable if the cardinality of a maximum matching equals the size of a minimum fractional vertex cover. The notion of stable graphs naturally extends to the weighted setting: a weighted graph (G,w) is stable if it admits a matching M and a fractional w-vertex cover y such that w(M) = y(V). Stable graphs play a significant role in combinatorial games. For example, stable graphs are exactly those graphs with non-empty core in the associated assignment games (introduced by Shapley and Shubik).
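
As a small worked example of the definition (my illustration; SciPy is an assumed dependency): the unit-weight triangle is unstable, since its maximum matching has size 1 while its minimum fractional vertex cover has value 3/2.

    import numpy as np
    from scipy.optimize import linprog

    # Unit-weight triangle on vertices {0, 1, 2}.
    edges = [(0, 1), (1, 2), (2, 0)]

    # Minimum fractional vertex cover:  min sum_v y_v  s.t.  y_u + y_v >= 1 per edge uv, y >= 0.
    A_ub = np.zeros((len(edges), 3))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0            # rewritten as -(y_u + y_v) <= -1
    res = linprog(c=np.ones(3), A_ub=A_ub, b_ub=-np.ones(len(edges)), bounds=[(0, None)] * 3)

    # A maximum matching of the triangle has size 1 (any two edges share a vertex),
    # so nu(G) = 1 < 1.5 = tau_f(G): the triangle is not stable.
    print("min fractional vertex cover:", res.fun)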

We investigate the problem of stabilizing, at minimum cost, an unstable unit-weight graph by increasing some of the edge-weights. We give a combinatorial polynomial time algorithm to solve the problem in factor-critical graphs. For general graphs, we show the existence of a half-integral optimal stabilizer, and exploit the half-integrality to show that the problem is strongly NP-hard. Further, we present an algorithm to compute an optimal additive stabilizer whose running time is exponential only in the size of the Tutte set in the Gallai-Edmonds decomposition. Thus, our algorithm is a fixed parameter algorithm where the parameter is the size of the Tutte set.

Tuesday, October 13

14:30 - 15:15 Ágnes Cseh: Popular matchings

Video recording

Abstract: We are given a bipartite graph where each vertex has a strict preference list ranking its neighbors. A matching M is stable if there is no unmatched pair ab such that a and b both prefer each other to their partners in M. A matching M is popular if there is no matching M' such that the number of vertices that prefer M' to M exceeds the number of vertices that prefer M to M'. While stable matchings capture the notion of local optimality, popular matchings provide a useful alternative to the well-studied stable matchings in the collective setting, aiming at the welfare of the entire group. In this talk we will present some basic complexity and algorithmic results on popular matchings and also show new structural properties that build a bridge between the sets of stable and popular matchings in an instance.

Joint work with Kavitha Telikepalli. 

15:15 - 16:00 Elisabeth Finhold: Circuit Diameters

Video recording

Abstract: In this talk we introduce the circuit diameter of a polyhedron as a generalization of the classical combinatorial diameter: instead of counting the number of edges needed to connect any two vertices of the polyhedron, we investigate how many steps are needed when walking along circuits, the potential edge directions of the polyhedron. In particular, this diameter concept allows walks to pass through the interior of the polyhedron.
 
We first present several notions of circuit diameter that arise from certain restrictions on the circuit walks, resulting in a whole hierarchy of circuit diameters that includes the combinatorial diameter. In the second part of the talk we then investigate and contrast these diameters for transportation polytopes and dual network flow polyhedra.


Friday, October 16

15:00 - 16:00 Neil Olver: Towards a generalization of the VPN Conjecture


Abstract: Robust network design considers the problem of designing a minimum-cost network to support some given set of demand patterns. This can be used to model the variability of traffic across a network. Particularly well studied in this context is the VPN problem, which essentially asks that the network support all possible fractional matchings. This was shown to be polynomially solvable (Goyal, O., Shepherd ’08). However, a very natural generalization of this “VPN Conjecture” remains open. Roughly speaking, the idea is to require that the network support all demands that can be routed on some given capacitated tree; if the tree is a star, the VPN problem is recovered. We propose a specific polynomial time algorithm, which we conjecture always returns an optimal solution.

In this talk, I will discuss a new simplification of the proof of the VPN Conjecture, and how this leads to some progress on the generalization. T-joins will play the starring role.



Joint work with Michel Goemans, Yusuke Kobayashi and Rico Zenklusen.

Tuesday, October 27

15:00 - 16:00 Ahmad Abdi: Studying Edmonds' trick of bi-directing edges

Abstract:  A classical well-studied problem in combinatorial optimization is that of finding a minimum weight spanning tree of a graph. An optimization-oriented approach is to use the cut linear programming (LP) relaxation, except that this LP is quite weak, outputting fractional optimal solutions even on a triangle. In 1967, Edmonds came up with a nifty trick to avoid this issue: bi-direct the edges and instead, in an equivalent manner, look for a minimum weight arborescence. He showed that the corresponding bi-directed cut LP relaxation is strong, in the sense that it can always output an integral optimal solution.
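
To make the triangle example explicit (standard background, not taken from the talk): the undirected cut relaxation for spanning trees reads

    \min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad
    x(\delta(S)) \ge 1 \;\; \forall\, \emptyset \neq S \subsetneq V, \qquad x \ge 0,

and on the unit-cost triangle the point x_e = 1/2 for every edge is feasible with value 3/2, whereas every spanning tree costs 2. After bi-directing the edges and fixing a root r, Edmonds' relaxation

    \min \sum_{a \in A} c_a y_a \quad \text{s.t.} \quad
    y(\delta^{-}(S)) \ge 1 \;\; \forall\, \emptyset \neq S \subseteq V \setminus \{r\}, \qquad y \ge 0

always has an integral optimal solution.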

In this talk, I will generalize Edmonds' operation to any set covering LP, and show how this operation always makes the LP stronger. We will survey the different attributes of this operation, and along the way, we make some beautiful connections to the extension complexity of vertex covers, Steiner trees, projective planes and binary clutters. I will end with some open questions on the packing property and on matroid bases.

Joint work with Ricardo Fukasawa and Laura Sanità.

Friday, October 30

14:30 - 15:15 Jochen Könemann: Approximating Prize Collecting Steiner Problems

Abstract: In the prize collecting Steiner forest problem (PCSF) we are given an undirected graph G = (V, E), non-negative edge costs c_e for all e in E, k terminal pairs (s_1, t_1), ..., (s_k, t_k), and non-negative penalties π_1, ..., π_k. Define the penalty π(F) of a forest F as the total penalty of the pairs that are not connected by F. The goal is then to find a forest F that minimizes c(F) + π(F). PCSF is APX-hard and admits an efficient 2.54-approximation algorithm. In this talk I will first recap known results for the approximation of PCSF and its tree special case. Subsequently, I will study the extreme-point structure of a natural polyhedral description of the problem, and develop an iterative-rounding-based 3-approximation.

Joint work with: F. Grandoni, T. Rothvoss, G. Schaefer & C. Swamy

15:15 - 16:00 Marco Di Summa: Cut generating functions

Abstract: The theory of cut-generating functions is a tool for automatically deriving cutting planes in mixed integer programming. In this talk I will present the basic ideas of this theory and illustrate how it leads to the study of subadditive functions. In particular, we discuss the importance of subadditive functions that are piecewise linear and the number of slopes of these functions. We present some new results and pose several open questions in this field.
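
A classical example that already exhibits the piecewise-linear, two-slope behaviour mentioned above is the Gomory function (standard background, stated here for orientation, not the talk's new material): for a right-hand side with fractional part f in (0,1),

    \pi_f(r) \;=\;
    \begin{cases}
      \dfrac{\{r\}}{f},     & \{r\} \le f,\\[4pt]
      \dfrac{1-\{r\}}{1-f}, & \{r\} > f,
    \end{cases}

where {r} denotes the fractional part of r. This function is subadditive, periodic with period 1, satisfies the symmetry condition \pi_f(r) + \pi_f(f - r) = 1, and has exactly two slopes, 1/f and -1/(1-f).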

Joint work with Amitabh Basu, Michele Conforti, Joseph Paat.

Tuesday, November 3

15:00 - 16:00 William Cook: Traveling Salesman Problems

Abstract: We discuss open research questions surrounding the traveling salesman problem. The focus will be on topics having potential impact on the computational solution of large-scale problem instances.

Thursday, November 5

15:00 - 16:00 Michele Conforti: Maximal S-free convex sets and the Helly number

Abstract: Given a subset S of ℝ^d, the Helly number h(S) is the largest size of a minimal family of convex sets whose intersection is disjoint from S. A convex set is S-free if its interior contains no point in S. The parameter f(S) is the largest number of maximal faces in an inclusionwise maximal S-free convex set. We study the relation between the parameters h(S) and f(S). Our motivation stems from the study of minimally infeasible integer programs and the theory of cut-generating functions. This is joint work with Marco Di Summa.
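
A standard example illustrating both parameters (background not drawn from the talk): for the integer lattice, Doignon's theorem gives

    h(\mathbb{Z}^d) \;=\; 2^d,

attained by the 2^d convex hulls of {0,1}^d with one vertex removed, whose common intersection contains the point (1/2, ..., 1/2) but no lattice point. Correspondingly, a maximal Z^d-free convex set has at most 2^d facets, so f(Z^d) ≤ 2^d.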

Tuesday, November 10

15:00 - 16:00 Hyung-Chan An: LP-Based Algorithms for Capacitated Facility Location

Abstract: Linear programming has played a key role in the study of algorithms for combinatorial optimization problems. In the field of approximation algorithms, this is well illustrated by the uncapacitated facility location problem. A variety of algorithmic methodologies, such as LP-rounding and primal-dual method, have been applied to and evolved from algorithms for this problem. Unfortunately, this collection of powerful algorithmic techniques had not yet been applicable to the more general capacitated facility location problem. In fact, all of the known algorithms with good performance guarantees were based on a single technique, local search, and no linear programming relaxation was known to efficiently approximate the problem.

In this work, we present a linear programming relaxation with constant integrality gap for capacitated facility location. We demonstrate that the fundamental theories of multi-commodity flows and matchings provide key insights that lead to this strong relaxation. Our algorithmic proof of the integrality gap is obtained by finally accessing the rich toolbox of LP-based methodologies: we present a constant-factor approximation algorithm based on LP-rounding.

This is joint work with Mohit Singh and Ola Svensson.

Thursday, November 12

14:30 - 15:15 Yuri Faenza: Knapsack: a tale of two polytopes

Abstract: Max knapsack is a classical optimization problem: given a threshold W and n objects, each coming with a weight and a profit/cost, find, among the collections of objects of total weight at most W, one of maximum profit. In its twin brother, the min knapsack, we are interested in the collections of total weight at least W and ask for one minimizing the cost. Algorithmically, both problems are well understood, as they are NP-hard and admit fully polynomial time approximation schemes. Polyhedrally, the task of describing or approximating the convex hull of knapsack solutions is less well understood. We will review results in this area, and present new ones as well as open problems.
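
To illustrate the algorithmic side mentioned above (classical textbook material, not the new results of the talk), here is the pseudo-polynomial dynamic program for max knapsack; min knapsack can be handled via the complementary instance with capacity (total weight) - W.

    def max_knapsack(weights, profits, W):
        """Classical O(n*W) dynamic program: best[c] is the maximum profit of a
        collection of objects of total weight at most c."""
        best = [0] * (W + 1)
        for w, p in zip(weights, profits):
            for c in range(W, w - 1, -1):    # go downwards so each object is used at most once
                best[c] = max(best[c], best[c - w] + p)
        return best[W]

    # three objects, capacity 5: the optimum takes the first two (weight 5, profit 7)
    print(max_knapsack(weights=[2, 3, 4], profits=[3, 4, 5], W=5))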

15:15 - 16:00 Jannik Matuschke: Robust matchings

Abstract: The following zero-sum game is played on a weighted graph: Alice selects a matching M and Bob selects a number k. Then, Alice receives a payoff equal to the ratio of the weight of the k heaviest edges of M to the maximum weight of a matching of size at most k in the graph. If M guarantees a payoff of at least L, then it is called L-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns a 1/√2-robust matching, which is best possible.

In this talk, we will see that Alice can improve on the guarantee of 1/√2 when she is allowed to play a randomized strategy. We devise a simple algorithm that returns a 1/ln(4)-robust randomized matching, based on the following observation: if all edge weights are integer powers of 2, then a lexicographically optimum matching is 1-robust. We prove this property not only for matchings but for a very general class of independence systems that includes matroid intersection, b-matchings, and strong 2-exchange systems. We also show that our robustness results for randomized matchings translate into an asymptotic robustness guarantee for deterministic matchings: when restricting Bob's choice to cardinalities larger than a given constant, Alice can find a single deterministic matching with approximately the same guaranteed payoff as in the randomized setting.
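
The payoff described above can be pinned down with a brute-force sketch (helper names and the exponential enumeration are my own, intended only for tiny instances): the robustness of a matching M is the minimum over k of the ratio between the weight of the k heaviest edges of M and the maximum weight of a matching with at most k edges.

    from itertools import combinations

    def is_matching(edges):
        seen = set()
        for u, v in edges:
            if u in seen or v in seen:
                return False
            seen.update((u, v))
        return True

    def best_matching_of_size_at_most(graph, k):
        """Maximum weight of a matching with at most k edges (brute force)."""
        best = 0.0
        for size in range(1, k + 1):
            for subset in combinations(graph, size):
                if is_matching([e for e, _ in subset]):
                    best = max(best, sum(w for _, w in subset))
        return best

    def robustness(graph, M):
        """min over k of (weight of the k heaviest edges of M) / (best matching of size <= k)."""
        top = sorted((w for _, w in M), reverse=True)
        ratio = 1.0
        for k in range(1, len(graph) + 1):
            opt_k = best_matching_of_size_at_most(graph, k)
            if opt_k > 0:
                ratio = min(ratio, sum(top[:k]) / opt_k)
        return ratio

    # Path a-b-c-d with edge weights 1, 2, 1: the matching {ab, cd} is only 1/2-robust
    # (for k = 1, Bob points at the single edge bc of weight 2), while {bc} is 1-robust.
    graph = [(("a", "b"), 1.0), (("b", "c"), 2.0), (("c", "d"), 1.0)]
    print(robustness(graph, [(("a", "b"), 1.0), (("c", "d"), 1.0)]))   # 0.5
    print(robustness(graph, [(("b", "c"), 2.0)]))                      # 1.0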

This is joint work with Martin Skutella and José A. Soto.

Tuesday, November 24

15:00 - 16:00 Gérard P. Cornuéjols: Cut-Generating Functions for Integer Variables

Abstract: For an integer linear program, Gomory’s corner relaxation is obtained by ignoring the nonnegativity of the basic variables in a tableau formulation. In this talk, we do not relax these nonnegativity constraints. We generalize a classical result of Gomory and Johnson characterizing minimal cut-generating functions in terms of subadditivity, symmetry, and periodicity.

The talk is based on joint work with Sercan Yıldız.

Wednesday, November 25

15:00 - 16:00 Daniel Dadush: New Conjectures on the Geometry of Numbers

Thursday, November 26

14:30 - 15:15 Chaitanya Swamy: LP-based Techniques for Minimum-Latency Problems

Abstract: Minimum-latency problems (MLPs) are vehicle-routing problems where we seek to plan routes for a fixed fleet of vehicles starting from one or more given depots so as to visit an underlying set of clients; the goal is to minimize the sum of the client waiting times (i.e., latencies). This class of problems has been widely studied in both the OR and CS literature, but various gaps remain in our understanding of these problems. A fundamental difficulty stems from our lack of understanding of linear-programming (LP) relaxations for these problems. This talk will describe some LP-based techniques for MLPs which are versatile enough to handle various minimum-latency style problems, and also powerful enough to yield the current-best constant factors for these problems. The techniques show how to exploit so-called configuration LPs and bidirected LP relaxations, which are two classes of LPs that are often believed to be quite powerful (in theory and practice), but have only sporadically been leveraged for vehicle-routing problems.

The talk will be self-contained.

15:15 - 16:00 Karthik Chandrasekaran: Local Testing for Membership in Lattices

Abstract: In this work, we initiate the study of local testing for membership in lattices: given a lattice and a target vector as input, the goal is to verify whether the target vector belongs to the lattice or is far from all lattice points. Due to the high-dimensional nature of some applications, we would like to achieve this goal by querying and examining only a small subset of coordinates of the target vector.

  • We show that adaptive tests do not save much in the number of queries compared to non-adaptive tests. We also show that it is sufficient to design canonical non-adaptive tests to achieve low query complexity.
  • We prove lower and upper bounds on the query complexity of local tests for membership in code formula lattices constructed from error-correcting codes. Instantiating these results for code formula lattices constructed from Reed-Muller codes yields matching lower and upper bounds on the query complexity of testing these lattices.

Based on joint work with Mahdi Cheraghchi, Venkata Gandikota and Elena Grigorescu.

Monday, November 30

15:00 - 16:00 Corinna Gottschalk, Jens Vygen: Better s-t-Tours by Gao Trees

Thursday, December 3

14:30 - 15:15 Sara Ahmadian: Local-Search based Approximation Algorithms for Mobile Facility Location Problem

Abstract: We consider the mobile facility location (MFL) problem. We are given a set of facilities and clients located in a common metric space G = (V, c). The goal is to move each facility from its initial location to a destination (in V) and assign each client to the destination of some facility so as to minimize the sum of the movement costs of the facilities and the client-assignment costs. We give the first local-search based approximation algorithm for this problem and achieve the best-known approximation guarantee. Our main result is a (3 + ε)-approximation for this problem, for any constant ε > 0, using local search. The previous best guarantee for MFL was an 8-approximation algorithm due to Friggstad and Salavatipour based on LP-rounding. There is an approximation-preserving reduction from the k-median problem to MFL, and our guarantee matches the best-known local-search based approximation guarantee for the k-median problem. Furthermore, our analysis is tight (up to o(1) factors), since the tight example for the local-search based 3-approximation algorithm for k-median can be easily adapted to show that our local-search algorithm has a tight approximation ratio of 3.

15:15 - 16:00 Jesus De Loera: Carathéodory's theorem after 100 years: new results and applications to Optimization and Game Theory

Abstract: Given a system Ax=b, x ≥ 0, what is the size of the sparsest solution? Can one provide good bounds? Carathéodory's theorem states that one can find a solution with no more than d non-zero entries, where d is the number of rows of A. In this talk I will review many variations of this theorem that have interesting applications in combinatorial optimization and game theory. In particular, I discuss new discrete and integral versions that improve prior results and show some of the fascinating structure of sparse integer solutions of Diophantine equations. Joint work with Iskander Aliev (Cardiff Univ.) and Chris O'Neill (Texas A&M).
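
The constructive proof behind the basic statement is itself a simple algorithm: while the support of a feasible x has more than d columns, those columns are linearly dependent, and moving along a kernel direction keeps Ax = b and x ≥ 0 while driving one more coordinate to zero. A rough numerical sketch of this standard argument (my illustration; SciPy's null_space is an assumed dependency):

    import numpy as np
    from scipy.linalg import null_space

    def sparsify(A, x, tol=1e-9):
        """Given x >= 0 with A x = b, return y >= 0 with A y = b whose support has
        at most rank(A) <= d nonzero entries (Caratheodory-style support reduction)."""
        x = x.astype(float).copy()
        while True:
            support = np.where(x > tol)[0]
            kernel = null_space(A[:, support])
            if kernel.shape[1] == 0:            # support columns are independent: done
                return x
            z = kernel[:, 0]
            if not (z < -tol).any():            # ensure some coordinate will decrease
                z = -z
            neg = z < -tol
            t = np.min(x[support][neg] / (-z[neg]))   # largest step keeping x >= 0
            x[support] += t * z                 # A x is unchanged since A[:, support] z = 0
            x[np.abs(x) < tol] = 0.0            # at least one coordinate leaves the support

    A = np.array([[1.0, 1.0, 1.0, 1.0],
                  [0.0, 1.0, 2.0, 3.0]])
    x = np.array([0.25, 0.25, 0.25, 0.25])       # dense feasible point; b = A @ x
    y = sparsify(A, x)
    print(y, np.allclose(A @ y, A @ x))          # at most 2 nonzero entries, same right-hand side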

Tuesday, December 8

15:00 - 16:00 Bernhard von Stengel: Computational Challenges for Non-Cooperative Games

Abstract: This talk gives an overview of personal views - and results - on equilibrium computation for noncooperative games. The strategic (tabular) and extensive (tree) form provide basic models of interaction that are widely used in economics. With such a game as input, one is typically interested in finding one or all of the Nash equilibria of the game. These are best viewed as combinatorial-geometric problems, represented by polytopes for games with two players. Nash equilibria are the endpoints of "complementary" pivoting paths. The two endpoints have opposing "signs" related to the index of a fixed point, and to properties of dynamic and strategic stability of the equilibrium. The oriented paths define the complexity class PPAD (polynomial parity argument with direction).

Finding one Nash equilibrium of a two-player game has been shown by Chen and Deng, and Daskalakis, Goldberg, and Papadimitriou, to be PPAD-complete, meaning that other path-following arguments (such as Sperner's lemma or Scarf's lemma, as used to prove the existence of a fixed point or equilibrium) can be reduced to it. Savani and von Stengel have constructed games with exponentially long paths. However, I consider this behaviour to be the exception, just as recent PSPACE-hardness results by Fearnley and Savani do not doom the simplex algorithm for linear programming. The - probably good - average behaviour of equilibrium path-following needs to be better understood. In addition, there is a dearth of serious applications of large-scale games compared to the proposed solution algorithms.

Thursday, December 10

15:00 - 16:00 Ruta Mehta: Equilibrium computation in two-player non-cooperative games

Abstract: Finding a Nash equilibrium of a two-player normal-form game (2-Nash) is one of the most extensively studied problems. Such a game can be represented by two payoff matrices A and B, one for each player. 2-Nash is PPAD-complete in general, while in the case of zero-sum games (B = -A) the problem reduces to an LP and hence is in P. I will present the Lemke-Howson algorithm, a classical method to solve the general 2-Nash problem. It is exponential in the worst case but runs fast in practice, and it yields a number of structural results on index, counting, and complexity.
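
As a small illustration of the zero-sum special case reducing to linear programming (my own SciPy-based sketch, not the Lemke-Howson method of the talk): the row player's maximin strategy solves max v subject to A^T x >= v*1, sum(x) = 1, x >= 0.

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_value(A):
        """Maximin strategy and value for the row player of the zero-sum game (A, -A)."""
        m, n = A.shape
        c = np.concatenate([np.zeros(m), [-1.0]])          # variables (x_1..x_m, v); maximize v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])          # per column j:  v - sum_i A[i,j] x_i <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * m + [(None, None)]          # x >= 0, v free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[m]

    A = np.array([[ 0.0,  1.0, -1.0],                      # rock-paper-scissors (row player's payoffs)
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])
    strategy, value = zero_sum_value(A)
    print(strategy, value)                                 # uniform strategy, value 0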

Towards efficient computation, and extending the notion of zero-sum games, Kannan and Theobald (2005) defined the rank of a game (A, B) as rank(A+B); for example, rank-0 games are exactly the zero-sum games. They gave an FPTAS for constant-rank games and asked for an efficient algorithm for exact computation, the primary difficulty being that the solution set may be disconnected, even for rank-1 games. I will answer this question affirmatively for rank-1 games. For rank 2 and higher, the problem turns out to be PPAD-complete.