Schedule of the Winter School

Monday, January 11

10:00 - 10:30 Registration & Welcome coffee
10:30 - 10:45 Opening remarks
10:45 - 11:55 Simon Foucart: Essentials of Compressive Sensing (Lecture 1)
12:00 - 13:30 Lunch break
13:30 - 14:40 Karlheinz Gröchenig: Gabor Analysis and its Mysteries (Lecture 1)
14:50 - 16:00 Jean-Luc Starck: Sparse Representations and their Application in Astrophysics (Lecture 1)
16:00 - 16:40 Tea and cake
16:40 - 17:50 Francis Bach: Large-scale Machine Learning and Convex Optimization (Lecture 1)
Afterwards: Reception

Tuesday, January 12

9:10 - 10:20 Francis Bach: Large-scale Machine Learning and Convex Optimization (Lecture 2)
10:20 - 10:50 Group photo and coffee break
10:50 - 12:00 Stephen Wright: Fundamentals of Optimization in Signal Processing (Lecture 1)
12:00 - 13:30 Lunch break
13:30 - 14:40 Stephen Wright: Fundamentals of Optimization in Signal Processing (Lecture 2)
14:50 - 16:00 Poster session 1
16:00 - 16:40 Tea and cake
16:40 - 17:50 Stephen Wright: Fundamentals of Optimization in Signal Processing (Lecture 3)

Thursday, January 14

9:10 - 10:20 Simon Foucart: Essentials of Compressive Sensing (Lecture 3)
10:20 - 10:50 Coffee break
10:50 - 12:00 Karlheinz Gröchenig: Gabor Analysis and its Mysteries (Lecture 3)
12:00 - 13:30 Lunch break
13:30 - 14:40 Jean-Luc Starck: Sparse Representations and their Application in Astrophysics (Lecture 3)
14:50 - 16:00 Problem session 2 (or Poster session 2)
16:00 - 16:40 Tea and cake

Abstracts

Francis Bach: Large-scale Machine Learning and Convex Optimization

Video recording: Lecture 1 Lecture 2 Lecture 3 Lecture 4

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data: there are many observations ("large n") and each observation is high-dimensional ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rate of these algorithms is O(1/√n) for general convex functions and improves to O(1/n) for strongly convex functions.

In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of O(1/n) without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
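The single-pass stochastic gradient scheme contrasted with batch methods above can be sketched in a few lines. This is a minimal illustration, not the algorithm from the lectures; the problem sizes, step-size schedule, and variable names are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: n observations, p features
# (sizes and step size are illustrative, not from the lectures).
n, p = 5000, 20
w_true = rng.standard_normal(p)
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Single pass of stochastic gradient descent with a decaying step size:
# each observation is visited exactly once, in random order.
w = np.zeros(p)
for t, i in enumerate(rng.permutation(n), start=1):
    grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i^T w - y_i)^2
    w -= (0.05 / np.sqrt(t)) * grad
```

Averaging the iterates (Polyak–Ruppert averaging) is one classical route toward the faster O(1/n) behavior mentioned in the abstract.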

Slides


Simon Foucart: Essentials of Compressive Sensing

Video recording: Lecture 1 Lecture 2 Lecture 3 Lecture 4

Compressive Sensing has recently had a tremendous impact in science and engineering, because it revealed the theoretical possibility of acquiring structured high-dimensional objects using much less information than previously expected, and more importantly because it provided practical procedures to do so. The foundations of the field rely on an elegant mathematical theory that has matured over the past few years. The goal of the lecture series is to highlight the main aspects of this theory. In particular, the following items will be discussed: the exact reconstruction of sparse vectors; the stable and robust reconstruction of nearly sparse vectors acquired with minor errors; the basic reconstruction algorithms (linear and convex programming, greedy algorithms, thresholding-based strategies); the coherence of a matrix; its restricted isometry constants; random measurement matrices; their optimality; and structure beyond sparsity.
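One of the greedy reconstruction algorithms mentioned above, orthogonal matching pursuit, is easy to sketch for the exact recovery of a sparse vector from random Gaussian measurements. The sizes below are illustrative and not taken from the lectures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover an s-sparse vector x in R^N from m < N Gaussian measurements y = A x
# (all sizes here are illustrative).
N, m, s = 200, 80, 5
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
A = rng.standard_normal((m, N)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true

# Orthogonal matching pursuit: greedily add the column most correlated
# with the residual, then refit by least squares on the chosen support.
chosen = []
r = y.copy()
for _ in range(s):
    chosen.append(int(np.argmax(np.abs(A.T @ r))))
    sub = A[:, chosen]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    r = y - sub @ coef
x_hat = np.zeros(N)
x_hat[chosen] = coef
```

With this many measurements relative to the sparsity level, recovery is exact with overwhelming probability, which is the kind of guarantee the restricted isometry analysis in the lectures makes precise.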

Slides


Karlheinz Gröchenig: Gabor Analysis and its Mysteries

Video recording: Lecture 1 Lecture 2 Lecture 3 Lecture 4

In Gabor analysis one studies the construction and properties of series expansions of functions with respect to a set of time-frequency shifts (phase-space shifts) of a single function. Such expansions, nowadays called Gabor series, are used for the analysis and description of speech and music signals and play an important role in wireless communications through orthogonal frequency-division multiplexing (OFDM). In physics they arise in quantum mechanics as coherent state expansions, and in operator theory in connection with non-commutative tori.

In the short course I will use these applications as a motivation for the beautiful mathematics of Gabor analysis.

Preliminary plan:

  • Lecture 1: What is Gabor analysis? Applications and basic concepts
  • Lecture 2: Mathematical structure of Gabor frames and bases. Coarse structure of Gabor frames: duality, characterization of Gabor frames.
  • Lecture 3: Fine structure of Gabor frames, concrete examples
  • Lecture 4: Further aspects and discussion of recent progress and open problems.
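A finite-dimensional toy version of such an expansion can be computed directly: build the time-frequency shifts of one window as a frame for C^L, then reconstruct a signal through the inverse frame operator (the canonical dual frame). All sizes and the window below are illustrative choices, not from the course.

```python
import numpy as np

# A tiny discrete Gabor frame on C^L: atoms are time-frequency shifts
# of a single Gaussian window g (all parameters are illustrative).
L, a, b = 48, 4, 4                     # signal length, time step, frequency step
t = np.arange(L)
g = np.exp(-np.pi * ((t - L / 2) / 6.0) ** 2)
g /= np.linalg.norm(g)

atoms = []
for n in range(L // a):                # time shifts by n*a samples
    for m in range(L // b):            # modulations by m*b frequency bins
        shifted = np.roll(g, n * a - L // 2)
        atoms.append(np.exp(2j * np.pi * b * m * t / L) * shifted)
G = np.stack(atoms, axis=1)            # L x (number of atoms) synthesis matrix

# Analysis with the frame, reconstruction via the inverse frame operator
# S = G G^H applied to the synthesized coefficients.
S = G @ G.conj().T
f = np.random.default_rng(2).standard_normal(L)
coeffs = G.conj().T @ f                # Gabor coefficients <f, g_{m,n}>
f_rec = np.linalg.solve(S, G @ coeffs).real
```

Since the lattice here is three-fold redundant (L / (a·b) = 3), the atoms form a frame rather than a basis, and perfect reconstruction requires the dual frame; that duality structure is exactly the subject of Lecture 2.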

Slides (Gabor Frames)


Jean-Luc Starck: Sparse Representations and their Application in Astrophysics

Slides (Part 1)

Slides (Part 2)

Slides (Part 3)

Slides (Part 4)


Stephen Wright: Fundamentals of Optimization in Signal Processing

Video recording: Lecture 1 Lecture 2 Lecture 3

Optimization formulations and algorithms are essential tools in solving problems in signal processing. In these sessions, we survey the most important topics, including first-order methods, regularized optimization, forward-backward methods, stochastic gradient methods, coordinate descent methods, conditional gradient / Frank-Wolfe methods, asynchronous parallel implementations, matrix optimization (including matrix completion), and augmented Lagrangian methods / ADMM.
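Of the methods listed above, forward-backward splitting has a particularly compact sketch: applied to the lasso it becomes ISTA, alternating a gradient step on the smooth term with the soft-thresholding prox of the ℓ1 term. The sizes, regularization weight, and iteration count below are illustrative, not from the lectures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward-backward splitting (ISTA) for the lasso problem
#   minimize_x  0.5 * ||A x - y||^2 + lam * ||x||_1
# (sizes and lam are illustrative).
m, n = 60, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = 1.0
y = A @ x_true
lam = 0.01

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)          # forward (gradient) step on the smooth part
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
```

With the step size at most 1/L, each iteration is guaranteed not to increase the objective, which is the basic convergence mechanism behind the forward-backward methods in the lectures.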

Slides (Part 1)

Slides (Part 2)

Slides (Part 3)
