Past seminars


You can find below a list of past seminars, with slides when available.

May 5th, 2022, Nicolas Courty (UBS, IRISA)
Title: Optimal Transport for Graph processing
Abstract: In this talk I will discuss how a variant of the classical optimal transport problem, known as the Gromov-Wasserstein distance, can help in designing learning tasks over graphs, and allows classical signal processing and data analysis tools, such as dictionary learning or online change detection, to be transposed to learning over these types of structured objects. Some theoretical and practical aspects will also be discussed.
Slides
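As a minimal, hedged illustration of the central quantity (my own NumPy sketch with toy matrices, not the methods from the talk): for a fixed coupling T between two graphs represented by distance matrices C1 and C2, the Gromov-Wasserstein objective sums the squared discrepancies (C1[i,k] - C2[j,l])^2 weighted by T[i,j] T[k,l], and can be evaluated without quadruple loops by expanding the square:

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein objective for a fixed coupling T:
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]."""
    p = T.sum(axis=1)                    # marginal of T on the first graph
    q = T.sum(axis=0)                    # marginal of T on the second graph
    # Expanding the square, the C1^2 and C2^2 terms only involve the marginals.
    term1 = (C1**2 @ p) @ p
    term2 = (C2**2 @ q) @ q
    cross = np.sum((C1 @ T @ C2.T) * T)  # sum C1[i,k] C2[j,l] T[i,j] T[k,l]
    return term1 + term2 - 2 * cross

# Toy example: two identical 3-node path graphs (shortest-path distances).
C1 = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
C2 = C1.copy()
T_id = np.eye(3) / 3   # coupling matching node i to node i
# gw_cost(C1, C2, T_id) is ~0: identical structures, perfectly matched.
```

Minimizing this objective over couplings with prescribed marginals is what dedicated solvers (for instance in the POT library) do; this sketch only evaluates the cost for a given coupling.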

January 27th, 2022, Julyan Arbel (INRIA)
Title: Understanding Priors in Bayesian Neural Networks at the Unit Level
Abstract: After a short introduction to Bayesian deep learning (that is to say, to Bayesian neural networks), I will present a recent work focused on Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. Such neural networks are well-known to induce an L2, or weight-decay, regularization. Our results characterize a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units before and after activation becomes increasingly heavy-tailed with the depth of the layer. We show that first-layer units are Gaussian, second-layer units are sub-exponential, and units in deeper layers are characterized by so-called sub-Weibull distributions. This provides new theoretical insight on Bayesian neural networks, which we corroborate with experimental simulation results. Joint work with Mariia Vladimirova (Inria), Jakob Verbeek (Facebook AI) and Pablo Mesejo (University of Granada) http://proceedings.mlr.press/v97/vladimirova19a.html

January 6th, 2022, Gersende Fort (CNRS)
Title: FedEM: Expectation Maximization algorithm for Federated Learning
Abstract: The Expectation Maximization (EM) algorithm is an iterative procedure designed to optimize positive functions defined by an integral. EM is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations. To the best of our knowledge, designing EM for federated learning (FL) is an open question. The goal of this talk is to propose new EM methods designed for FL. First, we will describe the usual EM algorithm in the so-called 'complete-data model from the exponential family' framework and how it is related to Stochastic Approximation methods: EM iterations consist of updating conditional expectations of sufficient statistics, and the limiting points of EM are the roots of a function which, in the FL setting, has a specific expression that calls for the use of Stochastic Approximation algorithms. Then, we will introduce and comment on two new EM algorithms for FL: the steps made by the local agents and by the central server, how the agent-server communication cost can be controlled, and how the possible partial participation of agents is managed. The second EM algorithm includes a variance reduction technique. Finite-time horizon complexity bounds will be discussed: what they imply about the role of the design parameters on the computational cost of the E-steps, the computational cost of the M-steps, and the communication cost. Numerical examples will illustrate our findings. This is a joint work with Aymeric Dieuleveut (CMAP, Ecole Polytechnique), Eric Moulines (CMAP, Ecole Polytechnique) and Geneviève Robin (LAMME, CNRS); published in the Proceedings of NeurIPS 2021.
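As background for the talk's starting point, the sketch below is a minimal centralized (non-federated) EM for a toy two-component Gaussian mixture with known unit variances: the E-step computes the conditional expectations of the sufficient statistics (the latent labels), and the M-step plugs them into closed-form updates. This is my own illustrative baseline, not the FedEM algorithm; the data and initialization are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from a two-component Gaussian mixture (unit variances).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

mu = np.array([-1.0, 1.0])   # initial means
pi = np.array([0.5, 0.5])    # initial mixture weights

for _ in range(50):
    # E-step: conditional expectations of the latent labels
    # (the sufficient statistics of the complete-data model).
    log_p = -0.5 * (x[:, None] - mu[None, :])**2 + np.log(pi)[None, :]
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: plug the expected sufficient statistics into closed-form updates.
    Nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    pi = Nk / len(x)
# mu converges near the true means (-2 and 3).
```

FedEM distributes exactly this kind of sufficient-statistic update across agents, with the server aggregating the local expectations.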

October 7th, 2021, Bruno Gaujal (INRIA)
Title: Discrete Mean Field Games: Existence of Equilibria and Convergence (joint work with Nicolas Gast and Josu Doncel)
Abstract: Mean field games were introduced by Lasry and Lions, as well as Huang, Caines and Malhamé, in 2006 to model interactions between a large number of strategic agents (players), and have met with great success ever since. Most of the literature concerns continuous state spaces and describes a mean field game as a coupling between a Hamilton-Jacobi-Bellman equation and a Fokker-Planck equation. Here, we are interested in presenting mean field games with a finite number of states and a finite number of actions per player. In this case, the analog of the Hamilton-Jacobi-Bellman equation is a Bellman equation, and the discrete version of the Fokker-Planck equation is a Kolmogorov equation. The models we present in this seminar, both in the synchronous and asynchronous cases, include non-linear dynamics with explicit interactions between players. This covers several natural phenomena such as information/infection propagation or resource congestion. We show that the only requirement needed to guarantee the existence of a Mean Field Equilibrium in mixed strategies is that the cost is continuous with respect to the population distribution (convexity is not needed). This result nicely mimics the conditions for existence of a Nash equilibrium in the simpler case of static population games. The second part of the seminar concerns convergence of finite games to mean field limits. We show that a mean field equilibrium is always an ε-approximation of an equilibrium of a corresponding game with a finite number N of players, where ε goes to 0 when N goes to infinity. This is the discrete version of similar results in continuous games. However, we also show that not all equilibria of the finite version converge to a Nash equilibrium of the mean field limit of the game. We provide several counter-examples to illustrate this fact. They are all based on the following idea: the “tit for tat” principle allows one to define many equilibria in repeated games with N players. However, when the number of players is infinite, the deviation of a single player is not visible to the population, which cannot punish them in retaliation for their deviation. This implies that while the games with N players may have many equilibria, as stated by the folk theorem, this may not be the case for the limit game. This fact is well known for large repeated games (the Anti-folk Theorem). However, to the best of our knowledge, these results had not yet been investigated in mean field games.
Slides

June 24th, 2021, Luc Pronzato (CNRS - I3S)
Title: Maximum Mean Discrepancy, Bayesian integration and kernel herding for space-filling design
Abstract: A standard objective in computer experiments is to predict/interpolate the behaviour of an unknown function f on a compact domain from a few evaluations inside the domain. When little is known about the function, space-filling design is advisable: typically, points of evaluation spread out across the available space are obtained by minimizing a geometrical (for instance, minimax-distance) or a discrepancy criterion measuring distance to uniformity. We focus our attention on sequential constructions where design points are added one at a time. The presentation is based on a survey, built on several recent results, that shows how energy functionals can be used to measure distance to uniformity. We investigate connections between design for integration of f with respect to a measure µ (quadrature design), construction of the (continuous) BLUE for the location model, and minimization of energy (kernel discrepancy) for signed measures. Integrally strictly positive definite kernels define strictly convex energy functionals, with an equivalence between the notions of potential and directional derivative showing the strong relation between discrepancy minimization and more traditional design of optimal experiments. Kernel herding algorithms, which are special instances of vertex-direction methods used in optimal design, can be applied to the construction of point sequences with suitable space-filling properties. Several illustrative examples are presented.
Slides, Video
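To make the kernel-herding idea concrete, here is a toy 1-D sketch (my own minimal illustration, not a construction from the survey; kernel bandwidth and grids are arbitrary choices): each greedy step selects the candidate point where the potential of the uniform target measure exceeds the potential of the points already chosen, which naturally spreads the points out.

```python
import numpy as np

def k(a, b, h=0.1):
    # Gaussian kernel on the line.
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * h**2))

grid = np.linspace(0, 1, 201)          # candidate design points
mc = np.linspace(0, 1, 2001)           # fine grid approximating mu = Uniform[0,1]
potential = k(grid, mc).mean(axis=1)   # E_{y~mu} k(x, y) for each candidate x

pts = []
for t in range(8):
    # Greedy herding step: favour regions of high target potential
    # that are not yet covered by previously selected points.
    score = potential.copy()
    if pts:
        score -= k(grid, np.array(pts)).sum(axis=1) / (t + 1)
    pts.append(grid[np.argmax(score)])
# pts spreads out over [0,1], a crude space-filling design.
```

The same recursion works on any domain where the target potential can be computed or approximated; in the talk's framing this is a vertex-direction step for minimizing the kernel discrepancy.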

April 29th, 2021, Julien Tierny (CNRS - Sorbonne Université)
Title: An overview of topological methods in data analysis and visualization
Abstract: Topological methods in data analysis and visualization focus on discovering intrinsic structures hidden in data. Based on established tools (such as Morse theory or persistent homology), these methods enable the robust extraction of the main features of a dataset into stable, concise and multi-scale descriptors that facilitate data analysis and visualization. In this talk, I will give an intuitive overview of the main tools used in topological data analysis and visualization (persistence diagrams, Reeb graphs, Morse-Smale complexes, etc.) with applications to concrete use cases in computational fluid dynamics, medical imaging, or quantum chemistry. I will conclude the talk by mentioning some of my contributions to this topic and I will discuss ongoing research efforts. The talk will be illustrated with results produced with the ''Topology ToolKit'' (TTK), an open-source library (BSD license) that we develop with collaborators to showcase our research. Tutorials for reproducing these experiments are available on the TTK website: https://topology-tool-kit.github.io/
Slides, Video

February 25th, 2021, Arthur Gretton (Gatsby Computational Neuroscience Unit - UCL)
Title: Generalized Energy-Based Models
Abstract: I will introduce Generalized Energy Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model, as in a Generative Adversarial Network), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the “generator”). In particular, while the energy function is analogous to the GAN critic function, it is not discarded after training. GEBMs are trained by alternating between learning the energy and the base, much like a GAN. Both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better quality samples. Empirically, the GEBM samples on image-generation tasks are of better quality than those from the learned generator alone, indicating that all else being equal, the GEBM will outperform a GAN of the same complexity. GEBMs also return state-of-the-art performance on density modelling tasks, and when using base measures with an explicit form. The talk is based on the ICLR 2021 paper https://arxiv.org/abs/2003.05033
Slides, Video

November 26th, 2020, Jean-Michel Hupé (FRAMESPA, Université de Toulouse Jean Jaurès & CNRS)
Title: What responsibilities for scientists in the turmoil of the anthropocene?
Abstract: Climate change, the sixth extinction of species and the general disruption of the earth system no longer belong to a worrying future: they are happening now. This new age of the anthropocene questions the responsibility of scientists in at least four ways. (1) Scientists of the IPCC have been raising the alarm for more than 30 years, yet CO2 emissions and ecological destruction have continued to increase. This failure of the alarm questions the neutral position of scientists and requires them to leave their ivory tower and find new ways to share their knowledge, as experimented with by the Studies in Political Ecology groups. (2) The 2015 Paris Agreement legally binds France to reach carbon neutrality by 2050. Everybody and all activities are concerned, including research. The Labos 1point5 initiative enjoins all researchers to measure their professional carbon footprint and then reduce it. (3) Science is urged to promise technological breakthroughs to meet a double, unprecedented and urgent challenge: to modify the socio-economic system in order to do without fossil fuels (which constitute 80% of energy consumption and are the engine of economic growth), while adapting to a degraded environment. Shouldn’t these two challenges be the priority for all scientists? (4) However, scientific research drives innovations that entail structural changes in our society, and hence bears some responsibility for the ongoing ecological destruction. So should science continue to promote economic growth through innovation, in particular in the energy-hungry field of digital applications and artificial intelligence, as currently promoted by European research agencies?
Slides

November 5th, 2020, Jean-Bernard Lasserre (DR CNRS / LAAS)
Title: Moment-SOS hierarchies in and outside optimization
Abstract: We introduce the Moment-SOS hierarchy, initially developed for polynomial optimization, and briefly describe some of its many applications in various areas of engineering, including super-resolution in signal processing, sparse polynomial interpolation, optimal design in statistics, volume computation in computational geometry, control and optimal control, and non-linear PDEs. In a second part we introduce an alternative (different) SOS hierarchy which provides a sequence of upper bounds converging to the global minimum of a polynomial on simple sets such as boxes, ellipsoids, simplices, discrete hypercubes, and their affine transformations.
Slides, A drawing

July 2nd, 2020, Cédric Févotte (DR CNRS / IRIT)
Title: Robust nonnegative matrix factorisation with the beta-divergence and applications in imaging
Abstract: Data is often available in matrix form, in which columns are samples, and processing such data often entails finding an approximate factorisation of the matrix into two factors. The first factor (the “dictionary”) yields recurring patterns characteristic of the data. The second factor (the “activation matrix”) describes in which proportions each data sample is made of these patterns. Nonnegative matrix factorisation (NMF) is a popular technique for analysing data with nonnegative values, with applications in many areas such as text information retrieval, user recommendation, audio signal processing, and hyperspectral imaging. In the first part, I will give a short tutorial about NMF for data processing and introduce a general majorisation-minimisation framework for NMF with the beta-divergence, a continuous family of loss functions that takes the quadratic loss, KL divergence and Itakura-Saito divergence as special cases. In the second part, I will present applications for hyperspectral unmixing in remote sensing and factor analysis in dynamic PET, introducing robust variants of NMF that account for outliers, nonlinear phenomena or specific binding. References: C. Févotte, J. Idier. Algorithms for nonnegative matrix factorization with the beta-divergence. Neural Computation, 2011. C. Févotte, N. Dobigeon. Nonlinear hyperspectral unmixing with robust nonnegative matrix factorization. IEEE Transactions on Image Processing, 2015. Y. C. Cavalcanti, T. Oberlin, N. Dobigeon, C. Févotte, S. Stute, M. J. Ribeiro, C. Tauber. Factor analysis of dynamic PET images: beyond Gaussian noise. IEEE Transactions on Medical Imaging, 2019.
Slides
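The majorisation-minimisation framework yields the well-known multiplicative updates; the sketch below implements the standard update rules for a generic beta (in the spirit of the cited Févotte-Idier paper, but this is my own minimal toy version, not the robust variants from the talk; data and parameters are arbitrary):

```python
import numpy as np

def mu_nmf(V, rank, beta=1.0, n_iter=200, seed=0):
    """Multiplicative-update NMF minimising the beta-divergence
    (beta=2: quadratic loss, beta=1: KL, beta=0: Itakura-Saito)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.5, 1.5, (m, rank))
    H = rng.uniform(0.5, 1.5, (rank, n))
    eps = 1e-12   # guards against division by zero
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH**(beta - 2) * V)) / (W.T @ WH**(beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH**(beta - 2) * V) @ H.T) / (WH**(beta - 1) @ H.T + eps)
    return W, H

# Noiseless rank-2 nonnegative data: the factorisation should fit V closely.
rng = np.random.default_rng(1)
V = rng.uniform(0, 1, (20, 2)) @ rng.uniform(0, 1, (2, 30))
W, H = mu_nmf(V, rank=2, beta=1.0)
```

The updates preserve nonnegativity by construction, since every factor in them is nonnegative; each iteration monotonically decreases the beta-divergence for the values of beta shown.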

June 11th, 2020, Laurent Massoulié (MSR-INRIA)
Title: From tree matching to graph alignment
Abstract: In this work we consider alignment of sparse graphs, for which we introduce the Neighborhood Tree Matching Algorithm (NTMA). For correlated Erdős-Rényi random graphs, we prove that the algorithm returns -- in polynomial time -- a positive fraction of correctly matched vertices, and a vanishing fraction of mismatches. This result holds with average degree of the graphs in O(1) and correlation parameter s bounded away from 1, conditions under which random graph alignment is particularly challenging. As a byproduct of the analysis we introduce a matching metric between trees and characterize it for several models of correlated random trees. These results may be of independent interest, yielding for instance efficient tests for determining whether two random trees are correlated or independent. Joint work with Luca Ganassali.
Slides
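For readers unfamiliar with the model, a correlated Erdős-Rényi pair can be sampled with a standard parent/child construction (a hedged sketch with arbitrary toy parameters, not the paper's code): sample a parent graph G(n, p/s), then let each of the two graphs keep each parent edge independently with probability s, so each graph is marginally G(n, p) and s governs the edge correlation.

```python
import numpy as np

def correlated_er_pair(n, p, s, seed=0):
    """Sample two correlated Erdos-Renyi graphs as boolean edge indicators
    over the n*(n-1)/2 vertex pairs. Requires p <= s <= 1."""
    rng = np.random.default_rng(seed)
    n_pairs = n * (n - 1) // 2
    parent = rng.random(n_pairs) < p / s   # parent graph G(n, p/s)
    g1 = parent & (rng.random(n_pairs) < s)  # child 1: keep each edge w.p. s
    g2 = parent & (rng.random(n_pairs) < s)  # child 2: independent thinning
    return g1, g2

# Toy instance; the talk's regime has average degree O(1), i.e. p = O(1/n).
g1, g2 = correlated_er_pair(n=500, p=0.02, s=0.8)
```

The expected number of common edges is about N*p*s (with N the number of vertex pairs), far above the ~N*p^2 overlap of two independent G(n, p) graphs, which is what makes alignment information-theoretically possible.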

November 14th, 2019, Shiro Ikeda (The Institute of Statistical Mathematics, Tokyo)
Title: Data science for the EHT black hole shadow imaging
Abstract: Last April, the EHT (Event Horizon Telescope) collaboration released the first image of the M87 black hole shadow. The EHT is a huge VLBI (very long baseline interferometer), and an inverse problem must be solved in order to obtain an image. We have developed a new imaging method for the EHT, and it is one of the three methods that contributed to the released image. In this talk, I will explain how the method was developed and how the final image was created.

June 6th, 2019, Eric Chassande-Mottin (CNRS/IN2P3 Astroparticule et Cosmologie APC, Univ Paris Diderot)
Title: Two black holes in the haystack
Abstract: On April 1, 2019, the gravitational wave detectors advanced LIGO and advanced Virgo started their third observation campaign. They have since detected about ten candidate signals associated with the merger of pairs of compact astrophysical objects (black holes and neutron stars). The analysis that will eventually confirm their exact nature and astrophysical origin is still in progress. Fruit of an experimental feat rewarded by the 2017 Nobel Prize in Physics, this result also relies on a range of signal analysis techniques and procedures that we will review.

March 28th, 2019, André Ferrari (Lagrange Laboratory (Nice))
Title: Image reconstruction for radio astronomy
Abstract: After a general introduction to radio astronomy, including the future SKA (Square Kilometer Array), the presentation will focus on image reconstruction in radio-interferometry with a particular attention to the derivation of the measurement equation. We will then present the MUFFIN (MUlti Frequency image reconstruction For radio INterferometry) algorithm which aims to reconstruct the sky at multiple wavelengths. MUFFIN allows a parallel computation of the most demanding computational steps and an automatic tuning of the regularization strengths. Simulation results based on realistic data will be presented.

January 10th, 2019, Pablo Jensen (ENS Lyon / IXXI)
Title: The unexpected link between neural nets and liberalism
Abstract: Sixty years ago, Frank Rosenblatt, a psychologist working for the army, invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. Hayek is well known for his 1974 Nobel prize… and for his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: “Personally, I prefer a liberal dictator to a democratic government that lacks liberalism”. This talk presents ongoing work on the link between Hayek’s ideology and neural networks.
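For context, Rosenblatt's learning rule itself fits in a few lines. The sketch below trains a perceptron on hypothetical linearly separable toy data (my own illustration, unrelated to the talk's historical material): whenever a sample is misclassified, the weights are nudged toward it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Linearly separable toy data with a margin: label = sign(x0 + x1).
X = rng.uniform(-1, 1, (500, 2))
X = X[np.abs(X.sum(axis=1)) > 0.2]   # enforce a margin so training converges fast
y = np.where(X.sum(axis=1) > 0, 1, -1)

w, b = np.zeros(2), 0.0
for _ in range(200):                  # passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:    # misclassified: apply Rosenblatt's update
            w += yi * xi
            b += yi
# On separable data with margin gamma, the perceptron convergence theorem
# bounds the total number of updates by (R / gamma)^2, so training halts
# with a separating hyperplane.
```

The classifier ends up labelling every training point correctly, as the convergence theorem guarantees for separable data.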