Schedule of the Game Theory Workshop
Monday, December 14
10:00 - 10:30 | Registration & Welcome coffee
10:30 - 11:30 | Lightning Talks
11:30 - 12:20 | Chaitanya Swamy: Signaling in Bayesian Games
12:20 - 14:40 | Lunch break
14:40 - 15:00 | Aviad Rubinstein: ETH-Hardness for Symmetric Signaling in Zero-Sum Games
15:00 - 15:20 | Yun Kuen (Marco) Cheung: Asynchronous Tatonnement and Coordinate Descent
15:20 - 15:40 | Kousha Etessami: The complexity of computing a (quasi-)perfect equilibrium for n-player extensive form games of perfect recall
15:40 - 16:00 | Jugal Garg: ETR-Completeness for Decision Versions of Multi-Player (Symmetric) Nash Equilibria
16:00 - 16:30 | Tea and cake
16:30 - 16:50 | Agnes Cseh: Paths to stable allocations
16:50 - 17:10 | Linda Farczadi: The nucleolus of matching games
17:10 - 17:30 | Matthias Mnich: Stable Marriage with Covering Constraints – A Complete Computational Trichotomy
20:00 - | Concert at the Arithmeum (Lennéstrasse 2): Leipziger Querflötenensemble Quintessenz - "Engelsklänge" |
Tuesday, December 15
9:40 - 10:30 | Rahul Savani: Polymatrix Games: Algorithms and Applications |
10:30 - 11:00 | Coffee break and group photo |
11:00 - 11:20 | Guido Schäfer: Efficient Equilibria in Polymatrix Coordination Games |
11:20 - 11:40 | Alexander Skopalik: Multilevel Network Games |
11:40 - 12:00 | Tobias Harks: Uniqueness of Equilibria in Atomic Splittable Polymatroid Congestion Games
12:00 - 14:10 | Lunch break |
14:10 - 15:00 | Edith Elkind: Coalitional Games on Sparse Social Networks |
15:00 - 16:00 | Open problems |
16:00 - 16:30 | Tea and cake |
16:30 - 16:50 | Dominik Peters: Structural Tractability in Hedonic Games |
16:50 - 17:10 | Tamás Király: Complexity of finding equilibria in linear service-providing games |
17:10 - 17:30 | Rachel Cummings: The Strange Case of Privacy in Equilibrium Models |
afterwards | Reception |
Wednesday, December 16
9:40 - 10:30 | Kurt Mehlhorn: An improved combinatorial algorithm for linear Arrow-Debreu Markets and its implementation |
10:30 - 11:00 | Coffee break |
11:00 - 11:20 | Martin Hoefer: Efficient Algorithms for Unknown Markets |
11:20 - 11:40 | Adrian Vetta: On the Welfare of Ascending Auctions |
11:40 - 12:00 | Georgios Piliouras: What Nash did not know: Modern Topology and Game Theory |
12:00 - 14:30 | Lunch break |
14:30 - 15:10 | Ilan Adler: Economic-Combinatorial Games |
15:10 - 16:00 | Ruta Mehta: A Market for Scheduling, with Applications to Cloud Computing |
16:00 - 16:30 | Tea and cake |
16:30 - 16:50 | John Fearnley: Distributed Methods for Finding Approximate Equilibria |
16:50 - 17:10 | Umang Bhaskar: On the Existence of Low-Rank Explanations for Mixed Strategy Behavior |
17:10 - 17:30 | Vahab Mirrokni: Coordination Mechanisms and Price of Anarchy via Fenchel Duality |
Thursday, December 17
9:40 - 10:30 | Paul Dütting: Truthful Outcomes from Non-Truthful Position Auctions |
10:30 - 11:00 | Coffee break |
11:00 - 11:20 | Elisa Celis: Constrained Online Auctions |
11:20 - 11:40 | Zsuzsanna Jankó: Trading Networks with Bilateral Contracts |
11:40 - 12:00 | Neil Olver: Easy or Selfish Scheduling on Related Machines |
12:00 - 15:00 | Lunch break |
15:00 - 15:20 | Matthias Feldotto: Existence and Approximation of Pure Nash Equilibria in Bandwidth Allocation Games |
15:20 - 15:40 | Thomas Kesselheim: Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round |
15:40 - 16:00 | Martin Gairing: Cost-Sharing in Congestion Games |
16:00 - 16:30 | Tea and cake |
17:00 - 18:00 | Arithmeum tour (We walk together from HIM, leaving at 16:45.) |
Abstracts
Ilan Adler: Economic-Combinatorial Games
Our main goal is to formulate and analyze a class of combinatorial games where the right to decide who will make the next move is determined by playing a two-person zero-sum game. In particular, the payoff matrix is changed from turn to turn based on the two players' actions in the zero-sum game. We propose several schemes for such games related to both partizan and non-partizan combinatorial games. We show that certain assumptions regarding the nature of the combinatorial games and the zero-sum payoffs lead to generalizations of known cases involving bidding for the right to move next. We also present several examples of such games where the proposed schemes lead to interesting analyses of the optimal strategies and insight into the computational complexity of finding such strategies.
Joint work with Aviezri S. Fraenkel and Sam Payne.
Umang Bhaskar: On the Existence of Low-Rank Explanations for Mixed Strategy Behavior
Nash equilibrium is a model to explain the observed behavior of strategic agents. Given observed player behavior in a strategic setting, the model fits if there exist payoffs for the players for which the equilibrium corresponds to the observed behavior. However, if the payoffs that explain observed player behavior require players to have solved a computationally hard problem, then the explanation provided is questionable. Thus, the computational complexity of the game that rationalizes observed behavior is clearly of crucial importance.
We study the fundamental case of two-player zero-sum games, and obtain conditions under which observed behavior of players can be explained by games in which Nash equilibria are easy to compute. We identify three structural conditions and show that if the observed behavior satisfies any of these conditions, then it can be explained by payoff matrices for which Nash equilibria are efficiently computable. Our conditions admit large and structurally complex data sets of observed behavior.
This is joint work with Siddharth Barman, Federico Echenique, and Adam Wierman.
Elisa Celis: Constrained Online Auctions
In a seminal work, Myerson characterized revenue-optimal auctions in Bayesian settings. This mechanism has constraints on feasible allocations, but otherwise can serve agents arbitrarily, including always (or never) serving a particular one, regardless of the realization of the agent values. Instead, we consider auctions in which the probability that agent i wins is constrained to be within a nonempty set S in [0,1]. We prove that the optimal constrained auction is Myerson's auction when the virtual values are shifted by a vector. We further give an efficient algorithm for finding the optimal auction when the constraints are of the form S = [a,b]. Several very natural problems, e.g., satisfying pre-negotiated contracts, budget constraints, and fair allocations, can be similarly addressed using this approach.
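As background for the result above (this is the textbook definition, not a detail from the paper): the Myerson virtual value of a bidder whose value is drawn from a distribution with CDF F and density f is

\[ \varphi(v) \;=\; v - \frac{1 - F(v)}{f(v)}, \]

and the shifted virtual values referred to here are of the form φ_i(v) + α_i for a shift vector α; how the shift vector is determined by the constraint sets is part of the result and is not reproduced here.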
Yun Kuen (Marco) Cheung: Asynchronous Tatonnement and Coordinate Descent
In this talk, I will first discuss some features observed in real markets, and prior work with Cole and Devanur that shows an equivalence between tatonnement, a natural price dynamic, and gradient descent in a subclass of markets. Together, these motivate the study of asynchronous tatonnement, and the extension of our results to asynchronous coordinate descent, which has recently found applications in large-scale optimization and machine learning. I will describe our model and state our main results.
At the end of this talk, I will also propose some plausible future research.
This is a joint work with Richard Cole.
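As a concrete (and purely illustrative) picture of the price dynamic discussed above, here is a minimal synchronous tatonnement loop for a toy Fisher market with Cobb-Douglas buyers, where buyer j always spends the fraction a[j][i] of her budget on good i; all numbers are made up, and this is not the asynchronous algorithm from the talk.

# Toy synchronous tatonnement in a Fisher market with Cobb-Douglas buyers.
# The money spent on good i is sum_j a[j][i] * budgets[j]; each good has unit
# supply, and the price of an over-demanded good is raised multiplicatively.
# Purely illustrative; not the asynchronous algorithm analyzed in the talk.

budgets = [2.0, 1.0]              # hypothetical budgets of two buyers
a = [[0.50, 0.50],                # spending fractions of buyer 0
     [0.25, 0.75]]                # spending fractions of buyer 1
prices = [1.0, 1.0]               # initial prices of the two goods
step = 0.1                        # step size of the multiplicative update

for _ in range(200):
    for i in range(len(prices)):
        spent = sum(a[j][i] * budgets[j] for j in range(len(budgets)))
        excess = spent / prices[i] - 1.0        # excess demand for good i
        prices[i] *= 1.0 + step * excess        # tatonnement price update

print(prices)   # converges to the equilibrium prices [1.25, 1.75]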
Agnes Cseh: Paths to stable allocations
The stable allocation problem is one of the broadest extensions of the well-known stable marriage problem. In an allocation problem, edges of a bipartite graph have capacities and vertices have quotas to fill. Here we investigate the case of uncoordinated processes in stable allocation instances. In this setting, a feasible allocation is given, and the aim is to reach a stable allocation by raising the value of the allocation along blocking edges and reducing it along worse edges if needed. Do such myopic changes lead to a stable solution? In our present work, we analyze both better and best response dynamics from an algorithmic point of view. With the help of two deterministic algorithms we show that random procedures reach a stable solution with probability one for all rational input data in both cases. Surprisingly, while there is a polynomial path to stability when better response strategies are played (even for irrational input data), the more intuitive best response steps may require exponential time.
Joint work with Martin Skutella.
Rachel Cummings: The Strange Case of Privacy in Equilibrium Models
We study how privacy technologies affect user and advertiser behavior in a simple economic model of targeted advertising. In our model, a consumer first decides whether or not to buy a good, and then an advertiser chooses an advertisement to show the consumer. The consumer's value for the good is correlated with her type, which determines which ad the advertiser would prefer to show to her — and hence, the advertiser would like to use information about the consumer's purchase decision to target the ad that he shows.
In our model, the advertiser is given only a differentially private signal about the consumer's behavior — which can range from no signal at all to a perfect signal, as we vary the differential privacy parameter. This allows us to study equilibrium behavior as a function of the level of privacy provided to the consumer. We show that this behavior can be highly counter-intuitive, and that the effect of adding privacy in equilibrium can be completely different from what we would expect if we ignored equilibrium incentives. Specifically, we show that increasing the level of privacy can actually increase the amount of information about the consumer's type contained in the signal the advertiser receives, and can lead to decreased utility for the consumer and increased profit for the advertiser; more generally, these quantities can be non-monotonic and even discontinuous in the privacy level of the signal.
Joint work with Katrina Ligett, Mallesh Pai, and Aaron Roth.
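For reference, the privacy notion used to parameterize the signal is the standard one: a randomized mechanism M is ε-differentially private if for all pairs of neighboring inputs x, x' and every set of outcomes S,

\[ \Pr[M(x) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(x') \in S]. \]

Here ε = 0 forces the signal to be independent of the consumer's behavior, while large ε allows an essentially perfect signal; how the equilibrium quantities vary with ε is the subject of the talk.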
Paul Dütting: Truthful Outcomes from Non-Truthful Position Auctions
Adding to a long list of properties that explain why the VCG mechanism is rarely used in practice, we exhibit a relative lack of robustness to inaccuracies in the choice of its parameters. For a standard position auction environment in which the auctioneer may not know the precise relative values of the positions, we show that under both complete and incomplete information a non-truthful mechanism supports the truthful outcome of the VCG mechanism for a wider range of these values than the VCG mechanism itself. The result for complete information concerns the generalized second-price mechanism and lends additional support to Google's use of this mechanism for ad placement. Particularly interesting from a technical perspective is the case of incomplete information, where a surprising combinatorial equivalence helps us to avoid confrontation with an unwieldy differential equation.
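To fix the setting for readers who have not seen position auctions, the following self-contained snippet contrasts the two payment rules in the textbook model with per-slot click-through rates and bidders ranked by bid. The numbers are made up and the code is illustrative only; it is not taken from the paper.

# Textbook position auction: slots with click-through rates (CTRs) and
# bidders ranked by bid. Compares generalized second-price (GSP) payments
# with VCG payments. Illustrative numbers only, not from the paper.

ctrs = [0.5, 0.3, 0.1]           # CTRs of the slots, highest first
bids = [10.0, 6.0, 4.0, 2.0]     # bids (values per click), sorted descending

def gsp_payments(ctrs, bids):
    # The bidder in slot i pays the (i+1)-th highest bid per click.
    pay = []
    for i in range(len(ctrs)):
        next_bid = bids[i + 1] if i + 1 < len(bids) else 0.0
        pay.append(ctrs[i] * next_bid)
    return pay

def vcg_payments(ctrs, bids):
    # The bidder in slot i pays the externality she imposes on the bidders
    # below her, each of whom would move up one slot if she were absent.
    ext = ctrs + [0.0]                       # CTR "below" the last slot is 0
    pay = []
    for i in range(len(ctrs)):
        harm = sum(bids[j] * (ext[j - 1] - ext[j])
                   for j in range(i + 1, len(ctrs) + 1) if j < len(bids))
        pay.append(harm)
    return pay

print(gsp_payments(ctrs, bids))   # [3.0, 1.2, 0.2]
print(vcg_payments(ctrs, bids))   # [2.2, 1.0, 0.2]

Under truthful bidding the VCG payments are the benchmark; the result above concerns when a non-truthful rule such as GSP nonetheless supports the VCG outcome in equilibrium.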
Edith Elkind: Coalitional Games on Sparse Social Networks
We consider coalitional games played on social networks (graphs), where feasible coalitions are associated with connected subsets of agents. We characterize families of graphs that have polynomially many feasible coalitions, and show that the complexity of computing common solution concepts and parameters of coalitional games on social networks is polynomial in the number of feasible coalitions. Also, we establish a connection between coalitional games on social networks and the synergy coalition group representation, and provide new complexity results for this representation. In particular, we identify a variant of this representation where computing the cost of stability is easy, but computing the value of the least core is hard.
Kousha Etessami: The complexity of computing a (quasi-)perfect equilibrium for n-player extensive form games of perfect recall
We study the complexity of computing/approximating several classic refinements of Nash equilibrium for n-player extensive form games of perfect recall (EFGPR), including perfect, quasi-perfect, and sequential equilibrium. We show that, for all of these refinements, approximating one such equilibrium within desired distance, for any EFGPR, is no harder than approximating a Nash equilibrium of a 3-player normal form game within desired distance: namely, it is FIXP_a-complete. We show this via new algebraic fixed point characterizations of epsilon-(quasi-)perfect equilibria, and by applying some real algebraic geometry. This builds on earlier work together with Hansen, Miltersen, and Sorensen on approximating a perfect equilibrium of an n-player normal form game. We also define, for delta > 0 and epsilon > 0, the notion of a "delta-almost epsilon-(quasi-)perfect equilibrium" for EFGPRs, show that computing one of these is PPAD-complete, and show that these suitably refine delta-almost-subgame-perfect equilibria for EFGPRs.
Linda Farczadi: The nucleolus of matching games
In a matching game we are given a weighted graph (G,w) and we have a total value equal to the maximum weight of a matching of G to allocate among the vertices of G. Given any such allocation, the excess of a subset of vertices S is defined as the difference between the total value allocated to the vertices of S and the weight of the maximum weight matching in the subgraph of G induced by S. The least core is the set of allocations that maximize the minimum excess of any subset of vertices. We study the problem of obtaining a linear programming formulation for the least core using a polynomial number of constraints. This problem has been solved for matching games with non-empty core and matching games with node weights. In this talk we present recent results for solving the problem on a new class of instances with general weights. We will also discuss how solving this problem can lead to a polynomial time algorithm for computing the nucleolus of matching games.
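In the notation of the abstract (allocation x, maximum matching weight ν(G), and ν(G[S]) for the induced subgraph on S), the least core is the set of optimal solutions x of the linear program

\[ \max\ \varepsilon \quad \text{s.t.} \quad \sum_{v \in S} x_v \;\ge\; \nu(G[S]) + \varepsilon \ \ \text{for all } \emptyset \neq S \subsetneq V, \qquad \sum_{v \in V} x_v \;=\; \nu(G), \]

with the usual convention that S ranges over proper nonempty subsets. This program has exponentially many constraints; the question discussed in the talk is when it can be rewritten with polynomially many.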
John Fearnley: Distributed Methods for Finding Approximate Equilibria
We study the problem of finding approximate Nash equilibria in bimatrix games. The previous best known method for computing approximate well-supported Nash equilibria (WSNE) finds a 0.6608-WSNE in polynomial time. In this paper, we propose a new method that always finds a 0.6528-WSNE in polynomial time. In contrast to previous approaches that solve a single LP that combines the two players' payoffs, this algorithm solves two independent LPs, each of which is derived from one of the two payoff matrices. This distributed approach allows us to provide variants of the algorithm that are communication efficient and query efficient, and so we beat the best known algorithms in these settings. Our algorithm can also be adapted to provide the best known communication-efficient algorithm for finding approximate Nash equilibria.
Joint work with Artur Czumaj, Argyrios Deligkas, Michail Fasoulakis, Marcin Jurdzinski, and Rahul Savani.
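For completeness, the well-supported notion used above: with payoffs normalized to [0, 1], a pair of mixed strategies (x, y) is an ε-WSNE if every pure strategy played with positive probability is an ε-best response to the opponent's strategy. A small checker (on a made-up example) looks like this:

import numpy as np

def is_eps_wsne(A, B, x, y, eps):
    # (x, y) is an eps-well-supported Nash equilibrium of the bimatrix game
    # (A, B) if every pure strategy in a player's support earns within eps of
    # that player's best pure response. Payoffs are assumed to lie in [0, 1].
    row_payoffs = A @ y                  # row player's payoff for each row
    col_payoffs = B.T @ x                # column player's payoff for each column
    row_ok = all(row_payoffs[i] >= row_payoffs.max() - eps
                 for i in range(len(x)) if x[i] > 0)
    col_ok = all(col_payoffs[j] >= col_payoffs.max() - eps
                 for j in range(len(y)) if y[j] > 0)
    return row_ok and col_ok

# Matching pennies with payoffs rescaled to [0, 1]; uniform play is an exact WSNE.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = 1.0 - A
x = y = np.array([0.5, 0.5])
print(is_eps_wsne(A, B, x, y, eps=0.0))   # True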
Matthias Feldotto: Existence and Approximation of Pure Nash Equilibria in Bandwidth Allocation Games
In bandwidth allocation games (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games do not have pure Nash equilibria in general, we take a closer look at some restricted classes of these games. For some classes we show that pure Nash equilibria exist; for others we show the existence of approximate pure Nash equilibria.
Martin Gairing: Cost-Sharing in Congestion Games
This work studies the price of anarchy and the price of stability of cost-sharing methods in weighted congestion games. We require that our cost-sharing method and our set of cost functions satisfy certain natural conditions and we present general tight price of anarchy bounds, which are robust and apply to general equilibrium concepts. We then turn to the price of stability and prove an upper bound for the Shapley value cost-sharing method, which holds for general sets of cost functions and which is tight in special cases of interest, such as bounded degree polynomials. Also for bounded degree polynomials, we close the paper with a somewhat surprising result, showing that a slight deviation from the Shapley value has a huge impact on the price of stability. In fact, for this case, the price of stability becomes as bad as the price of anarchy.
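For reference, the Shapley value cost-sharing rule mentioned above charges each player her average marginal contribution to the joint cost: for a player set N and a cost function c on the subsets of players using a resource,

\[ \xi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(c(S \cup \{i\}) - c(S)\bigr). \]

In weighted congestion games this is applied separately on each resource, to the resource's cost as a function of the set of its users; the precise weighted variant studied in the paper is defined there.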
Jugal Garg: ETR-Completeness for Decision Versions of Multi-Player (Symmetric) Nash Equilibria
As a result of a series of important works, the complexity of computing a 2-player Nash equilibrium is by now well understood, even when equilibria with special properties are desired and when the game is symmetric. However, for multi-player games, when equilibria with special properties are desired, the only result known is due to Schaefer and Stefankovic (2011): checking whether a 3-player Nash Equilibrium (3-Nash) instance has an equilibrium in a ball of radius half in the l∞-norm is ETR-complete, where ETR is the class Existential Theory of Reals.
Building on their work, we show that the following decision versions of 3-Nash are also ETR-complete: checking whether (i) there are two or more equilibria, (ii) there is an equilibrium in which each player gets at least h payoff, where h is a rational number, (iii) a given set of strategies are played with non-zero probability, and (iv) all the played strategies belong to a given set.
Next, we give a reduction from 3-Nash to symmetric 3-Nash, hence resolving an open problem of Papadimitriou. This yields ETR-completeness for symmetric 3-Nash for the last two problems stated above as well as completeness for the class FIXP_a, a variant of FIXP for strong approximation. All our results extend to k-Nash, for any constant k ≥ 3.
This is a joint work with Ruta Mehta, Vijay V. Vazirani, and Sadra Yazdanbod.
Tobias Harks: Uniqueness of Equilibria in Atomic Splittable Polymatroid Congestion Games
We study uniqueness of Nash equilibria in atomic splittable congestion games and derive a uniqueness result based on polymatroid theory: when the strategy space of every player is a bidirectional flow polymatroid, equilibria are unique. Bidirectional flow polymatroids are introduced as a subclass of polymatroids possessing certain exchange properties. We show that important cases such as base orderable matroids can be recovered as a special case of bidirectional flow polymatroids. On the other hand, we show that matroidal set systems are in some sense necessary to guarantee uniqueness of equilibria: for every atomic splittable congestion game with at least three players and non-matroidal set systems per player, there is an isomorphic game having multiple equilibria. Our results leave a gap between base orderable matroids and general matroids for which we do not know whether equilibria are unique.
Joint work with Veerle Timmermans.
Martin Hoefer: Efficient Algorithms for Unknown Markets
This talk surveys some of our recent work on revealed preference problems in classic market models, when we lack exact knowledge about the number of agents, their individual utilities and endowments. We only rely on queries that return aggregate demand for goods in order to, e.g., learn utility functions or compute a market equilibrium.
As a main result, we design a simple ascending-price algorithm to compute a (1+ε)-approximate equilibrium in Arrow-Debreu exchange markets with the weak gross substitute (WGS) property. It runs in time polynomial in the market parameters and log(1/ε). Our algorithm is the first polynomial-time algorithm for this broad class of exchange markets which is easy to implement and avoids heavy machinery such as the ellipsoid method. Beyond the oracle model, we use our approach to design the first polynomial-time algorithm to compute an exact equilibrium for markets with spending constraint utilities, a piecewise linear concave generalization of linear utilities. This resolves an open problem recently posed in the literature.
Zsuzsanna Jankó: Trading Networks with Bilateral Contracts
We consider general networks of bilateral contracts that include supply chains. We define a new stability concept, called trail stability, and show that any network of bilateral contracts has a trail-stable outcome whenever agents’ choice functions satisfy full substitutability. Trail stability is a natural extension of chain stability, but is a stronger solution concept in general contract networks. Trail-stable outcomes are not immune to deviations of arbitrary sets of firms. In fact, we show that outcomes satisfying an even more demanding stability property – full trail stability – always exist. We pin down conditions under which trail-stable and fully trail-stable outcomes have a lattice structure and explore vacancy chain dynamics. We then completely describe the relationships between all stability concepts.
Joint work with Tamás Fleiner, Akihisa Tamura, Alexander Teytelboym.
Thomas Kesselheim: Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round
Many algorithms that are originally designed without explicitly considering incentive properties are later combined with simple pricing rules and used as mechanisms. The resulting mechanisms are often natural and simple to understand. But how good are these algorithms as mechanisms? Truthful reporting of valuations is typically not a dominant strategy (certainly not with a pay-your-bid, first-price rule, but it is likely not a good strategy even with a critical-value, or second-price-style, rule either). Our goal is to show that, used in this way, a wide class of approximation algorithms yields mechanisms with a low Price of Anarchy.
The seminal result of Lucier and Borodin [SODA 2010] shows that combining a greedy algorithm that is an alpha-approximation algorithm with a pay-your-bid payment rule yields a mechanism whose Price of Anarchy is O(alpha). In this paper we significantly extend the class of algorithms for which such a result is available by showing that this close connection between approximation ratio on the one hand and Price of Anarchy on the other also holds for the design principle of relaxation and rounding provided that the relaxation is smooth and the rounding is oblivious.
We demonstrate the far-reaching consequences of our result by showing its implications for sparse packing integer programs, such as multi-unit auctions and generalized matching, for the maximum traveling salesman problem, for combinatorial auctions, and for single source unsplittable flow problems. In all these problems our approach leads to novel simple, near-optimal mechanisms whose Price of Anarchy either matches or beats the performance guarantees of known mechanisms.
Joint work with Paul Dütting and Éva Tardos.
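As background, the standard smoothness condition from which robust price-of-anarchy bounds follow (stated here in its textbook form for welfare maximization; the smooth-relaxation condition in the paper is an adaptation of it): a game with welfare W(s) = Σ_i u_i(s) is (λ, μ)-smooth if for every strategy profile s and every profile s*,

\[ \sum_i u_i(s_i^{*}, s_{-i}) \;\ge\; \lambda\, W(s^{*}) - \mu\, W(s), \]

in which case the welfare of every (coarse correlated) equilibrium is at least λ/(1+μ) times the optimum.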
Tamás Király: Complexity of finding equilibria in linear service-providing games
A fundamental problem in algorithmic game theory is to determine the hardness of computing equilibria in various classes of games. In this talk we consider an LP-based special case of the Generalized Nash Equilibrium Problem, where the interaction of players is limited to providing services to each other at fixed prices. This enables us to investigate computational complexity as a function of the structure of provider-customer relationships. We show that the problem is PPAD-complete in general, but there is a polynomial-time algorithm if every strong component of the digraph describing the provider-customer relationships is a simple directed cycle.
Kurt Mehlhorn: An improved combinatorial algorithm for linear Arrow-Debreu Markets and its implementation
We present an improved combinatorial algorithm for the computation of equilibrium prices in the linear Arrow-Debreu model. For a market with n agents and integral utilities bounded by U, the algorithm runs in O(n^7 log^3(nU)) time. This improves upon the previously best algorithm of Ye by a factor of Ω̃(n). The algorithm refines the algorithm described by Duan and Mehlhorn and improves it by a factor of Ω̃(n^3). The improvement comes from a better understanding of the iterative price adjustment process, the improved balanced flow computation for nondegenerate instances, and a novel perturbation technique for achieving nondegeneracy. We also report on an ongoing implementation effort.
Joint work with Jugal Garg and Omar Darwish and Ran Duan.
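For readers unfamiliar with the model (a standard formulation, not part of the algorithm itself): in the linear Arrow-Debreu market, agent i brings one unit of good i and has utility Σ_j u_ij x_ij for a bundle x_i. Prices p form an equilibrium together with an allocation x if goods clear, budgets are exactly spent, and money is only spent on maximum bang-per-buck goods:

\[ \sum_i x_{ij} = 1 \ \ \forall j, \qquad \sum_j p_j\, x_{ij} = p_i \ \ \forall i, \qquad x_{ij} > 0 \;\Rightarrow\; \frac{u_{ij}}{p_j} = \max_k \frac{u_{ik}}{p_k}. \]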
Ruta Mehta: A Market for Scheduling, with Applications to Cloud Computing
We present a market for allocating and scheduling resources to agents who have specified budgets and need to complete specific tasks. Two important aspects required in this market are: (1) agents need specific amounts of each resource to complete their tasks, and (2) agents would like to complete their tasks as soon as possible. In incorporating these aspects, we arrive at a model that deviates substantially from market models studied so far in economics and theoretical computer science. Indeed, all known techniques developed to compute equilibria in markets in the last decade and a half seem not to apply here. We give a polynomial time algorithm for computing an equilibrium using a new technique that is somewhat reminiscent of the ironing procedure used in the characterization of optimal auctions by Myerson. This is in spite of the fact that the set of equilibrium prices could be non-convex; in fact, it could have “holes”. Our market model is motivated by the cloud computing marketplace. Even though this market is already huge and is projected to grow at a massive rate, it is currently run in an ad hoc manner.
Joint work with Nikhil Devanur, Jugal Garg, Vijay Vazirani and Sadra Yazdanbod.
Vahab Mirrokni: Coordination Mechanisms and Price of Anarchy via Fenchel Duality
We develop a new framework based on LP and Fenchel duality for bounding the robust price of anarchy for a large class of games. We use our framework to give the first PoA bounds for temporal routing games on graphs and energy minimization games in machine scheduling. Most notably, we present the first coordination mechanisms with bounded PoA for temporal routing over general graphs, show a related lower bound, and give an improved bound on the price of stability for this game. Previously, coordination mechanisms with bounded PoA were only known for restricted classes of graphs such as trees or parallel edges. Furthermore, we demonstrate the wide applicability of our framework by giving new proofs of the PoA bounds for three classical games - weighted affine congestion games, competitive facility location games and simultaneous second price auctions. Our price of anarchy bounds for these games match the ones known in the literature or obtained using the smoothness framework.
All our proofs use the following technique: we first show that for a wide class of games, one can formulate the underlying optimization problem as a linear (or convex) program such that the (Fenchel) dual of the relaxation encodes the equilibrium condition. Further, the dual program has a specific structure with variables for players and resources, which can be naturally interpreted as the cost incurred by the players and the congestion of the resources in an equilibrium outcome. This lets us argue that the dual variables we define satisfy the dual constraints, and using the weak duality theorem we establish the PoA bounds.
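For reference, the Fenchel machinery invoked above is the classical one: the conjugate of a function f is

\[ f^{*}(y) \;=\; \sup_{x}\ \bigl(\langle x, y \rangle - f(x)\bigr), \]

and the resulting Fenchel-Young inequality f(x) + f^{*}(y) ≥ ⟨x, y⟩ provides the weak duality used in the argument sketched above, with the dual variables interpreted as player costs and resource congestions.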
Matthias Mnich: Stable Marriage with Covering Constraints – A Complete Computational Trichotomy
We consider stable marriage problems equipped with covering constraints: here the input distinguishes a subset of women, and we seek a matching with the fewest blocking pairs that matches all of the distinguished women. This generalizes the notion of arranged marriages introduced by Knuth in 1976, in which the partner of each distinguished person is fixed a priori.
Our main result is a complete computational complexity trichotomy of the stable marriage problem with covering constraints, into polynomial-time solvable cases, fixed-parameter tractable cases, and cases that are W[1]-hard, for every choice among a set of natural parameters, namely the maximum length of preference lists for men and women, the number of blocking pairs allowed, and the number of distinguished women. Thereby, we fully answer an open problem of Hamada et al. (ESA 2011).
Joint work with Ildikó Schlotter, Budapest University of Technology and Economics.
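To fix terminology (the classical notion, not code from the paper): a pair (m, w) that is not matched together blocks a matching if both prefer each other to their current situation, with being unmatched treated as the worst outcome; the objective above minimizes the number of such pairs subject to matching all distinguished women. A small checker on a made-up instance:

# Count the blocking pairs of a matching in a stable marriage instance.
# Preference lists go from most to least preferred; an unmatched partner
# (None) is treated as worse than any listed partner. Made-up instance.

men_pref = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women_pref = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}

def prefers(pref_list, candidate, current):
    # True if `candidate` is strictly better than `current` (None = unmatched).
    if current is None:
        return True
    return pref_list.index(candidate) < pref_list.index(current)

def blocking_pairs(matching, men_pref, women_pref):
    # Return all pairs (m, w) that would both rather be with each other.
    partner_of = {w: m for m, w in matching.items() if w is not None}
    pairs = []
    for m, prefs in men_pref.items():
        for w in prefs:
            if matching.get(m) == w:
                continue
            if prefers(prefs, w, matching.get(m)) and \
               prefers(women_pref[w], m, partner_of.get(w)):
                pairs.append((m, w))
    return pairs

matching = {"m1": "w1", "m2": None}                      # m2 and w2 unmatched
print(blocking_pairs(matching, men_pref, women_pref))    # [('m2', 'w2')]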
Neil Olver: Easy or Selfish Scheduling on Related Machines
We consider a question about the weighted sum of completion times on related machines, one with two equivalent formulations. What is the approximation ratio of the natural greedy list scheduling heuristic? Alternatively, what is the worst-case price of anarchy of the scheduling game in this setting? This is well understood for the case of unrelated machines, but we will discuss an approach (based on convex relaxations) for obtaining better bounds in the related machines case.
Joint work with Jose Correa, Guido Schaefer and Mona Rahn.
Dominik Peters: Structural Tractability in Hedonic Games
Hedonic games are a well-studied model of coalition formation, in which selfish agents are partitioned into disjoint sets, and agents care about the make-up of the coalition they end up in. The computational problem of finding a stable outcome in a hedonic game tends to be computationally intractable, even after severely restricting the types of preferences that agents are allowed to report. We investigate a structural way of achieving tractability, by requiring that agents' preferences interact in a well-behaved manner. More precisely, we show that stable outcomes can be found in linear time for hedonic games that satisfy a notion of bounded treewidth and bounded degree.
Georgios Piliouras: What Nash did not know: Modern Topology and Game Theory
Nash's seminal work on the universality of equilibria in finite games was an ingenious application of fixed point theorems, the most sophisticated results of his era's topology, and ushered in game theory in its current standard form. Recent algorithmic work has established that Nash equilibria are in fact computationally equivalent to general fixed points, both strengthening the connection between game theory and dynamical systems and weakening the predictive value of Nash equilibrium.
But if Nash equilibrium is not the (complete) answer, then what is? Modern fundamental topological results in dynamical systems provide new insights and raise new computational questions.
Joint work with Christos Papadimitriou.
Aviad Rubinstein: ETH-Hardness for Symmetric Signaling in Zero-Sum Games
We prove that, assuming the exponential time hypothesis, finding an ε-approximately optimal symmetric signaling scheme in a two-player zero-sum game requires quasi-polynomial time. This is tight by [CCDEHT'15] and resolves an open question of [Dughmi'14]. We also prove that finding a multiplicative approximation is NP-hard.
Rahul Savani: Polymatrix Games: Algorithms and Applications
Polymatrix games are multi-player games that capture pairwise interactions between players. They are defined by an underlying interaction graph, where nodes represent players, and every edge corresponds to a two-player strategic form (bimatrix) game. This talk will be a short survey that compares and contrasts polymatrix and bimatrix games. One of the main messages is that polymatrix games retain many of the nice features of bimatrix games, e.g., having rational equilibria, and the applicability of complementary pivoting methods, such as Lemke's algorithm, to find exact equilibria. Some applications of polymatrix games will also be discussed.
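Spelling out the definition above in code (illustrative only, not from the talk): a polymatrix game is a graph with a bimatrix game on every edge, and a player's payoff under a pure profile is the sum of her payoffs in the games on her incident edges.

import numpy as np

# A polymatrix game on a triangle. Each edge (u, v) carries a bimatrix game
# stored as edge_games[(u, v)] = (Au, Av), where Au[su, sv] is u's payoff and
# Av[sv, su] is v's payoff. The matrices below are made up for illustration.

C = np.array([[1.0, 0.0], [0.0, 1.0]])     # a 2x2 coordination game
edge_games = {(0, 1): (C, C), (1, 2): (C, C), (0, 2): (C, C)}

def payoff(player, profile, edge_games):
    # Sum the player's payoffs over all edges incident to her.
    total = 0.0
    for (u, v), (Au, Av) in edge_games.items():
        if player == u:
            total += Au[profile[u], profile[v]]
        elif player == v:
            total += Av[profile[v], profile[u]]
    return total

print(payoff(0, profile=[0, 0, 1], edge_games=edge_games))   # 1.0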
Guido Schäfer: Efficient Equilibria in Polymatrix Coordination Games
We consider polymatrix coordination games with individual preferences, where every player corresponds to a node in a graph and plays a separate bimatrix game with non-negative symmetric payoffs with each of her neighbors. We review some recent results on α-approximate k-equilibria of these games. In particular, we give a complete characterization of the values of α for which such equilibria exist, derive an almost tight bound on the price of anarchy and settle some complexity questions related to the verification and existence of these equilibria. Finally, we highlight some natural means to reduce the inefficiency of Nash equilibria and pose some interesting open questions.
Joint work with Mona Rahn.
Alexander Skopalik: Multilevel Network Games
We consider a multilevel network game, where nodes can improve their communication costs by connecting to a high-speed network. The nodes are connected by a static network and each node can decide individually to become a gateway to the high-speed network. We study existence and convergence to pure equilibria and derive bounds on the price of anarchy.
Chaitanya Swamy: Signaling in Bayesian Games
We study the optimization problem faced by an informed principal in a Bayesian game, who can reveal some information about the underlying random state of nature to the players (thereby influencing their payoffs) so as to obtain a desirable equilibrium. This yields the following signaling problem: what information should the principal reveal to achieve his goal? This is a natural design question motivated by uncertainty in games and has attracted much recent attention.
In this talk, I will highlight some recent almost-optimal hardness results and some approximation algorithms for Bayesian two-player zero-sum games and Bayesian network routing games. Both these classes admit a canonical, tractable choice of equilibrium, which also decouples the concerns of optimal-signaling computation and equilibrium computation. For Bayesian zero-sum games, when the principal seeks to maximize the equilibrium utility of a player, by exploiting duality and the equivalence of optimization and separation in an unconventional way, we give a simple proof of NP-hardness of obtaining an (additive) FPTAS. We also obtain a PTAS for a structured class of games. For Bayesian routing games, where the goal is to minimize the average latency at equilibrium, we prove a multiplicative 4/3-factor hardness result; this factor is tight for linear latencies since we show that full revelation achieves a 4/3-approximation.
Joint work with Umang Bhaskar, Yu Cheng, and Young Kun Ko.
Adrian Vetta: On the Welfare of Ascending Auctions
We examine welfare guarantees for the two main combinatorial auctions, namely the Simultaneous Multi-Round Auction (SMRA) and the Combinatorial Clock Auction (CCA).