Schedule of the Workshop "Evolutionary Dynamics and Market Behavior"

Monday, July 15

09:00 - 09:45  William Sandholm: Large Deviations and Stochastic Stability in the Small Noise Double Limit
09:45 - 10:30  Mathias Staudigl: Large Deviations and Stochastic Stability in the Large Population Limit
10:30 - 11:00  Coffee
11:00 - 11:45  Ryoji Sawa: Stochastic stability in large population and small mutation limits for general normal form games
11:45 - 12:30  Tymon Tatur: Evolution in isolated populations
12:30 - 14:30  Lunch break
14:30 - 15:15  Carlos Alos-Ferrer: Stochastic Learning, Trader Matching, and the Selection of Market Platforms
15:15 - 16:00  Jun Honda: A simple formula for stochastically stable states in the logit response dynamics
16:00 - 16:30  Coffee

Tuesday, July 16

09:00 - 09:45  Christoph Kuzmics: Evolution of taking roles
09:45 - 10:30  Frank Thuijsman: Network characteristics and efficient coordination
10:30 - 11:00  Coffee
11:00 - 11:45  Cheng Wan: A dynamical model of a two-level competition
11:45 - 12:30  Alberto Pinto: Nash equilibrium for a Hotelling Town
12:30 - 14:30  Lunch break
14:30 - 15:15  Itai Arieli: Learning Dynamics and Speed of Convergence in Population Games
15:15 - 16:00  Yannick Viossat: No-regret dynamics and fictitious play
16:00 - 16:30  Coffee

Wednesday, July 17

09:00 - 09:45  Panayotis Mertikopoulos: Entropy-driven game dynamics, quantal responses and Hessian Riemannian structures
09:45 - 10:30  Rida Laraki: Higher Order Game Dynamics
10:30 - 11:00  Coffee
11:00 - 11:45  Ratul Lahkar: Continuous Logit Dynamic and Price Dispersion
11:45 - 12:30  Marius Ochea: Heterogeneous Heuristics in 3x3 Bimatrix Population Games
12:30 - 14:30  Lunch break
14:30 - 15:15  Dai Zusai: Best response dynamics in multi-task environments
15:15 - 16:00  Yezekael Hayel: Markov Decision Evolutionary Games: Theory and Applications
16:00 - 16:30  Coffee

Thursday, July 18

09:00 - 09:45  Lorens Imhof: Fast selection in finite populations
09:45 - 10:30  Mario Bravo: Reinforcement learning with restrictions on the action set
10:30 - 11:00  Coffee
11:00 - 11:45  Zibo Xu: Convergence of best response dynamics in extensive-form games
11:45 - 14:30  Lunch break
14:30 - 15:15  Mathieu Faure: Quasi-stationary distributions for randomly perturbed dynamical systems
15:15 - 16:00  Gregory Roth: Stochastic persistence of interacting structured populations
16:00 - 16:30  Coffee

Mario Bravo: Reinforcement learning with restrictions on the action set
(with Mathieu Faure)

Abstract: Consider a 2-player normal-form game repeated over time. We introduce an adaptive learning procedure in which the players observe only their own realized payoff at each stage. We assume that agents do not know their own payoff function and have no information about the other player. Furthermore, the players face restrictions on their own action set: at each stage, their choice is limited to a subset of their actions. We prove that the empirical distributions of play converge to the set of Nash equilibria for zero-sum games, potential games, and games in which one player has only two actions.
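The abstract does not spell out the procedure, so the sketch below is only a hypothetical illustration of the setting, under assumed details: matching pennies as the stage game, cumulative-payoff reinforcement with a logit choice rule, and a restriction scheme that with probability 1/2 forces a single available action. Only realized payoffs are used, matching the talk's informational assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matching pennies (zero-sum). Each player observes only her realized payoff.
# The reinforcement rule, restriction scheme, and all parameters below are
# illustrative assumptions, not the authors' procedure.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoff; column player receives -A

def run(T=20000, eta=0.01):
    scores = [np.zeros(2), np.zeros(2)]  # cumulative reinforcements per action
    counts = [np.zeros(2), np.zeros(2)]  # empirical distribution of play
    for _ in range(T):
        actions = []
        for p in range(2):
            # restriction: with prob. 1/2 only one randomly drawn action is available
            if rng.random() < 0.5:
                avail = np.array([0, 1])
            else:
                avail = rng.integers(0, 2, size=1)
            w = np.exp(scores[p][avail] - scores[p][avail].max())  # logit weights
            actions.append(rng.choice(avail, p=w / w.sum()))
        i, j = actions
        payoffs = (A[i, j], -A[i, j])
        for p in range(2):
            scores[p][actions[p]] += eta * payoffs[p]  # payoff-based update only
            counts[p][actions[p]] += 1
    return [c / T for c in counts]

freqs = run()
```

Since the unique equilibrium of matching pennies is uniform play, one would expect the empirical frequencies to hover around 1/2; the forced singleton rounds also guarantee that every action keeps being sampled.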

Rida Laraki: Higher Order Game Dynamics
(Joint work with Panayotis Mertikopoulos) https://sites.google.com/site/ridalaraki/

Abstract: Continuous-time game dynamics are typically first order systems where payoffs determine the growth rate of the players' strategy shares. In this paper, we investigate what happens beyond first order by viewing payoffs as higher order forces of change, specifying, e.g., the acceleration of the players' evolution instead of its velocity (a viewpoint which emerges naturally when it comes to aggregating empirical data of past instances of play). To that end, we derive a wide class of higher order game dynamics, generalizing first order imitative dynamics and, in particular, the replicator dynamics. We show that strictly dominated strategies become extinct in $n$-th order payoff-monotonic dynamics $n$ orders of magnitude faster than in the corresponding first order dynamics; furthermore, in stark contrast to first order, weakly dominated strategies also become extinct for $n\geq 2$. All in all, higher order payoff-monotonic dynamics lead to the elimination of weakly dominated strategies, followed by the iterated deletion of strictly dominated strategies, thus providing a dynamic justification of the well-known epistemic rationalizability process of Dekel and Fudenberg (1990). Finally, we also establish a higher order analogue of the folk theorem, and we show that convergence to strict equilibria in $n$-th order dynamics is likewise $n$ orders of magnitude faster than in first order.
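The speed claim can be illustrated with a toy numerical sketch (my own construction, not the paper's): compare first order exponential scores, $\dot y = u$, with a second order variant, $\ddot y = u$, in a two-strategy game whose second strategy is strictly dominated with a constant payoff gap. The score gap then grows like $t$ in the first order system but like $t^2/2$ in the second order one, so the dominated share decays one order faster.

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

# Illustrative 2-strategy game: strategy 1 is strictly dominated by strategy 0
# with a constant payoff gap of 1 (payoffs here do not depend on the opponent).
A = np.array([[2.0, 2.0],
              [1.0, 1.0]])

dt, steps = 0.01, 1000              # integrate up to t = 10
y1 = np.zeros(2)                    # first order scores:  y' = u(x)
y2, v = np.zeros(2), np.zeros(2)    # second order scores: y'' = u(x)
for _ in range(steps):
    x1, x2 = softmax(y1), softmax(y2)
    y1 += dt * (A @ x1)
    v += dt * (A @ x2)              # velocity accumulates the payoff...
    y2 += dt * v                    # ...and the score accumulates the velocity
x1, x2 = softmax(y1), softmax(y2)
```

At $t = 10$ the first order score gap is about 10, leaving the dominated strategy with weight of order $e^{-10}$, while the second order gap is about $t^2/2 = 50$, leaving weight of order $e^{-50}$.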

Panayotis Mertikopoulos: Entropy-driven game dynamics, quantal responses and Hessian Riemannian structures

Abstract: A key property of the replicator dynamics is their equivalence to the exponential weight algorithm in continuous time, i.e. the learning process in which players select an action with probability exponentially proportional to the action's aggregate payoff over time. By considering a more general quantal response framework which extends the logit model above, we derive a new class of entropy-driven game dynamics that is intimately related to the notion of a Hessian Riemannian metric on the simplex (viz. a metric obtained by taking the Hessian of an entropy-like function). In this setting, dominated strategies become extinct and the folk theorem of evolutionary game theory continues to hold, but the rate of extinction of dominated strategies (or of convergence to strict equilibria) changes with the entropy used. In the case of random matching in two player games, we show that time averages converge if the dynamics are permanent, and we establish a sufficient condition for permanence. If there is time, we will also discuss the global properties of these dynamics in potential, contractive and conservative games, and we will explore some links with smooth best-reply dynamics.
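For intuition, here is a minimal discretized sketch of the exponential weight process the abstract starts from (the payoff matrix and step sizes are my assumptions): strategies are weighted exponentially by their aggregate payoff over time, and a strictly dominated strategy dies out.

```python
import numpy as np

# Exponential weights / logit choice over aggregate payoffs: x ∝ exp(score).
# Single-population game; strategy 1 is strictly dominated by strategy 0.
# The payoff matrix is an illustrative assumption, not from the talk.
A = np.array([[3.0, 3.0, 3.0],
              [1.0, 1.0, 1.0],    # strictly dominated by strategy 0
              [0.0, 4.0, 2.0]])

def exp_weights(A, steps=2000, dt=0.01):
    score = np.zeros(A.shape[0])         # aggregate (time-integrated) payoffs
    for _ in range(steps):
        x = np.exp(score - score.max())  # logit choice (numerically stable)
        x /= x.sum()
        score += dt * (A @ x)            # accumulate instantaneous payoffs
    return x

x = exp_weights(A)
```

Replacing the exponential with a different quantal response map (equivalently, a different Hessian Riemannian entropy) would change the extinction rate but not the qualitative outcome, in line with the abstract.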

Marius Ochea: Heterogeneous Heuristics in 3x3 Bimatrix Population Games

Abstract: We numerically investigate population-level evolutionary dynamics resulting from individual-level, adaptive play under both homogeneous ("self-play") and heterogeneous ("mixed play") scenarios. In a class of 3x3 bimatrix normal form games (Sparrow and van Strien, 2009, GEB) that includes Rock-Paper-Scissors as a special case, rich limit behavior unfolds as the heuristics' parameters are modulated.

Gregory Roth: Stochastic persistence of interacting structured populations
(with Sebastian Schreiber)

Abstract: I will illustrate our result through an evolutionary rock-scissors-paper game with some spatial structure in the population.

Ryoji Sawa: Stochastic stability in large population and small mutation limits for general normal form games

Abstract: We consider models of stochastic evolution in normal form games with K ≥ 2 strategies in which agents employ best response with mutations. Formulating the dynamic as a Markov chain whose state space is the set of best responses, we overcome the technical difficulty that arises with large populations. We study the long run behavior of the resulting Markov process in four limiting cases: the small noise limit, the large population limit, and the double limits taken in each of the two orders. We characterize conditions under which the selection results of the two orders of the double limit coincide.
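As a concrete toy instance of such a chain (all details assumed: a 2x2 stag hunt, one revising agent per period, uniform mutations with probability eps, none of this taken from the talk), the stationary distribution of a best-response-with-mutations dynamic can be computed via detailed balance, and for small eps it concentrates on the risk-dominant convention.

```python
import numpy as np

N = 20                            # population size
a, b, c, d = 3.0, 0.0, 2.0, 2.0   # stag hunt: u(A,A)=3, u(A,B)=0, u(B,A)=2, u(B,B)=2
                                  # (B,B) is risk dominant since d - b > a - c

def stationary(eps):
    """Stationary distribution of the number of A-players (birth-death chain)."""
    up = np.zeros(N + 1)          # transition prob. k -> k+1
    down = np.zeros(N + 1)        # transition prob. k -> k-1
    for k in range(N + 1):
        x = k / N                 # fraction of A-players
        uA = a * x + b * (1 - x)
        uB = c * x + d * (1 - x)
        br = 1.0 if uA > uB else (0.0 if uA < uB else 0.5)
        pA = (1 - eps) * br + eps / 2   # best response with uniform mutation
        up[k] = (N - k) / N * pA        # a B-player revises and switches to A
        down[k] = k / N * (1 - pA)      # an A-player revises and switches to B
    pi = np.ones(N + 1)
    for k in range(N):            # detailed balance for a birth-death chain
        pi[k + 1] = pi[k] * up[k] / down[k + 1]
    return pi / pi.sum()

pi = stationary(0.01)
```

Here the mass piles up at k = 0 (all-B, the risk-dominant convention) even though (A,A) is payoff dominant; increasing the mutation rate spreads the distribution out.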
