Past seminars


You can find below a list of past seminars, with slides when available.

March 7th, 2024, Isabelle Bloch (Sorbonne Université (Paris))
Title: Fuzzy Sets: A Key Towards Hybrid Explainable Artificial Intelligence for Image Understanding
Abstract: In this talk, we will discuss the role of fuzzy set theory in the context of explainable artificial intelligence. We advocate that combining several frameworks in artificial intelligence, including fuzzy set theory, and adopting a hybrid point of view both for knowledge and data representation and for reasoning, offers opportunities towards explainability. This idea is instantiated on the example of image understanding, expressed as a spatial reasoning problem.

February 1st, 2024, Robin Ryder (Université Paris-Dauphine)
Title: Bayesian methods for inferring the history of languages
Abstract: Languages change through time in a manner comparable to biological evolution. Models have been developed for many aspects of human languages, including vocabulary, syntax and phonology. The complexity of these models, as well as the nature of the questions of interest, make the Bayesian framework quite natural in this setting, which explains why much of the research in Statistics applied to Historical Linguistics uses Bayesian methods. I shall present an overview of various models, starting with Morris Swadesh's failed attempts at glottochronology in the 1950s, then looking at some models developed in the last two decades. I shall go into more detail for a model of so-called "core" lexical data by a stochastic process on a phylogenetic tree, with an initial focus on the Sino-Tibetan family of languages and on the issue of dating the most recent common ancestor to these languages. This will allow me to discuss issues of model robustness and validation. I shall conclude with some very recent work about joint estimation of lexical and phonological changes through a model of random discrete matrices, and its application to the history of sign languages.

January 11th, 2024, Bruno Loureiro (ENS (Paris))
Title: A statistical physics perspective on the theory of machine learning: recent progress for shallow neural networks
Abstract: The past decade has witnessed a surge in the development and adoption of machine learning algorithms to solve day-to-day computational tasks. Yet, a solid theoretical understanding of even the most basic tools used in practice is still lacking, as traditional statistical learning methods are unfit to deal with the modern regime in which the number of model parameters is of the same order as the quantity of data - a problem known as the curse of dimensionality. Curiously, this is precisely the regime studied by physicists since the mid-19th century in the context of interacting many-particle systems. This connection, which was first established in the seminal work of Elizabeth Gardner and Bernard Derrida in the 80s, is the basis of a long and fruitful marriage between these two fields. In this talk I will motivate and review the connections between Statistical Physics and problems from Machine Learning, in particular concerning the theory of shallow neural networks.

December 7th, 2023, Julie Digne (CNRS, LIRIS (Lyon))
Title: An overview of general trends in Geometry Processing
Abstract: In this talk I will give an introduction to the field of Geometry Processing: how to process a 3D shape, from early-stage tasks (denoising, super-resolution or upsampling, and surface reconstruction) to high-level tasks such as shape recognition or shape editing. From traditional axiomatic methods to more recent deep learning developments, including implicit neural representations, the field has undergone some radical changes over recent years. While deep learning for regular Euclidean data has led to a huge leap in performance for image analysis and image generation, the progress is not as impressive for shape analysis or shape generation. This is largely due to the challenges posed by non-Euclidean data, which require special dedicated architectures, often not as efficient and widely spread as image ones, and this talk will present some techniques addressing these challenges.

November 9th, 2023, Yohann de Castro (Institut Camille Jordan, Ecole Centrale de Lyon)
Title: A Mathematical Journey of Regularization on Measures
Abstract: In this presentation, we will take a tour of the path of regularization on measures. We will use an example to revisit recent developments in this tool at the intersection of statistics, learning, and optimization. Along the way, we will encounter various species such as the dual of a convex program with its subgradient, empirical process concentration with its "golfing scheme", kernel functions with their Hilbertian structures, stochastic gradient descent with particles, and beautiful weather days with an optimal transport distance called partial displacement. The landscapes traversed will lead us to discuss applications in unsupervised learning (deep or not), super-resolution, tensor processing or even quadrature. We will avoid technical details on sensitive topics, but any questions about these aspects are welcome. No technical equipment is required; a regular practice of indulgence is sufficient.

November 16th, 2022, Agnès Desolneux (CNRS - Centre Borelli)
Title: Maximum Entropy Distributions For Image Synthesis Under Statistical Constraints
Abstract: In this seminar, I will talk about exponential distributions that are maximum entropy distributions under statistical constraints, and more precisely about the way they are defined and how to sample them. I will present several applications in image synthesis.
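Note (illustrative, not the specific construction of the talk): under moment constraints E[f_i(X)] = c_i, the maximum entropy distribution takes the exponential-family form p_theta(x) proportional to exp(sum_i theta_i f_i(x)), and such densities can be sampled with a generic Metropolis-Hastings scheme. The minimal 1-D Python sketch below is purely illustrative; the constraint functions, parameter values and step size are arbitrary choices.

import numpy as np

def metropolis_maxent(theta, features, n_samples=5000, step=0.5, seed=0):
    """Sample p(x) proportional to exp(sum_i theta_i * f_i(x)) with random-walk Metropolis.
    `features` maps a scalar x to the vector (f_1(x), ..., f_m(x))."""
    rng = np.random.default_rng(seed)
    log_p = lambda x: float(np.dot(theta, features(x)))
    x, samples = 0.0, []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()          # random-walk proposal
        if np.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop                                     # accept
        samples.append(x)
    return np.array(samples)

# Toy constraints (hypothetical): fixing E[x] and E[x^2] yields a Gaussian target.
theta = np.array([1.0, -0.5])                            # natural parameters, chosen arbitrarily
samples = metropolis_maxent(theta, lambda x: np.array([x, x ** 2]))
print(samples.mean(), samples.var())                     # should be close to 1.0 and 1.0 here

With this choice of theta, the target is a Gaussian with mean 1 and variance 1, so the printed estimates should be close to those values.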

November 17th, 2022, Shiro Ikeda (ISM, Tokyo)
Title: The Images of Black Hole Shadows
Abstract: The Event Horizon Telescope Collaboration (EHTC) has more than 300 members from different backgrounds and countries. We have released images of two black hole shadows: the first of the black hole at the centre of the M87 galaxy (April 2019), and the second of the one at the centre of our Milky Way galaxy (May 2022). The EHT is a huge Very Long Baseline Interferometer (VLBI), which differs from optical telescopes in that a lot of computation is required to obtain a single image. I have been involved in the project as a data scientist and collaborated with EHTC members to develop a new imaging method. In this talk, I will explain how the new imaging technique was developed and the final images were created.

October 27th, 2022, Julien Flamant (CNRS, CRAN (Nancy))
Title: Polarimetric phase retrieval: uniqueness and algorithms
Abstract: Phase retrieval problems are ubiquitous in imaging applications, such as crystallography, coherent diffraction imaging or ptychography, among others. To enable the systematic use of light polarization information in such problems, we propose a novel phase retrieval model, called polarimetric phase retrieval, that leverages the physics of polarization measurement in optics. In this talk, I will first detail the uniqueness properties of this new model by unraveling equivalencies with a peculiar polynomial factorization problem. The latter will turn out to play a critical role, both regarding uniqueness of the problem and the design of algebraic reconstruction methods based on approximate greatest common divisor computations. I will eventually highlight a computationally efficient reconstruction strategy for polarimetric phase retrieval that combines algebraic with more standard iterative approaches. Several numerical experiments on synthetic data will be presented.
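Note (background only, not the polarimetric model of the talk): classical phase retrieval asks to recover a signal from intensity-only measurements,

\text{find } x \in \mathbb{C}^n \ \text{such that}\ b_i = \left|\langle a_i, x \rangle\right|^2, \qquad i = 1, \dots, m,

where the a_i are known measurement vectors. The polarimetric variant discussed in the talk enriches such intensity measurements with polarization information.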

October 27th, 2022, Antoine Roueff (Ecole Centrale de Marseille)
Title: Statistical developments dedicated to Interstellar Medium Analysis
Abstract: Joint work with J. Pety, M. Gerin, F. Le Petit, E. Bron, P. Chainais, J. Chanussot and the Orion B consortium. The Orion B consortium (http://iram.fr/~pety/ORION-B/) investigates the early stage of star formation in giant molecular clouds, where the density is between 10² and 10⁵ cm⁻³ and the kinetic temperature is between 10 and 100 Kelvin. The underlying physical and chemical state of such a medium cannot be fully reproduced in laboratories. Therefore, astrophysicists build theories, develop sophisticated chemical codes and a diversity of radiative transfer models, and test them on molecular emissions from “close” Giant Molecular Clouds (located about 1000 light years from us). Recently, data analysts joined the Orion B consortium to design statistical algorithms that encode astrophysical knowledge. Their mission consists in framing the problems encountered within statistical learning frameworks. This means selecting the adequate model and a priori knowledge to reach the best performance in estimation, detection and classification, but also documenting the diagnostic capabilities of molecular observations based on information theory. The seminar will focus on the physical (and simplified) description of the problem and will present a glimpse of the different ongoing developments. Some results obtained on data observed close to the Horsehead Nebula will illustrate the encountered challenges. These data are spectacular in terms of the amount of spatial (10⁶ pixels) and spectral (200,000 channels) information covering the millimeter spectrum between 86 and 116 GHz. To simplify, one could say that the purpose of the consortium is actually to extract as much information as possible from this huge dataset.

May 19th, 2022, Pierre Weiss (CNRS)
Title: Blind deconvolution and deblurring
Abstract: An essential and largely open problem is that of blind deconvolution or super-resolution. When a measurement system acquires a signal or an image, it is filtered by the impulse response of the system and then sampled. Although filtering is unavoidable, the induced loss of resolution can have disastrous consequences. In microscopy, for example, there have been two Nobel prizes in the last decade for systems that exceed the diffraction limit. The question raised in this presentation is: can we recover part of the lost information when the response of the optical system is not perfectly known? We will present some theoretical and practical aspects of this problem in the case of imaging. At the end of the presentation, we will show that this problem can be tackled much more efficiently than in the past with learning techniques. We will end with some open questions.
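Note (background only; this is the classical non-blind setting, not the learning-based approach of the talk): the forward model described above is a convolution of the unknown image with the system's point spread function (PSF). When the PSF is known, a Wiener filter already gives a simple inverse; blind deconvolution is precisely the case where this knowledge is missing. A minimal numpy sketch, with an arbitrary noise-to-signal parameter:

import numpy as np

def blur(image, psf):
    """Forward model: circular convolution of the image with the point spread function."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """Classical Wiener filter, usable only when the PSF is known --
    blind deconvolution is precisely the setting where it is not."""
    H = np.fft.fft2(psf, blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy usage: blur a random "image" with a small box PSF, then deconvolve.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:3, :3] = 1 / 9.0
restored = wiener_deconvolve(blur(img, psf), psf)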

May 5th, 2022, Nicolas Courty (UBS, IRISA)
Title: Optimal Transport for Graph processing
Abstract: In this talk I will discuss how a variant of the classical optimal transport problem, known as the Gromov-Wasserstein distance, can help in designing learning tasks over graphs, and makes it possible to transpose classical signal processing or data analysis tools, such as dictionary learning or online change detection, to these structured objects. Some theoretical and practical aspects will also be discussed. [An illustrative sketch follows below.]
Slides
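Note (illustrative sketch, not the pipeline of the talk): the Gromov-Wasserstein distance compares two graphs through their internal distance structure only, so no correspondence between node sets is needed. The sketch below assumes the POT (Python Optimal Transport) and networkx packages are available, and uses shortest-path matrices of two toy graphs; all sizes and parameter choices are arbitrary.

import numpy as np
import networkx as nx   # assumed available, used only to build toy graphs
import ot               # POT: Python Optimal Transport

# Two small toy graphs represented by their shortest-path distance matrices.
G1, G2 = nx.cycle_graph(8), nx.path_graph(10)
C1 = np.asarray(nx.floyd_warshall_numpy(G1), dtype=float)
C2 = np.asarray(nx.floyd_warshall_numpy(G2), dtype=float)

# Uniform weights on the nodes of each graph.
p, q = ot.unif(C1.shape[0]), ot.unif(C2.shape[0])

# Gromov-Wasserstein coupling between the two structures.
T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss', log=True)
print(log['gw_dist'])   # the Gromov-Wasserstein discrepancy between the graphs

The returned coupling T is the kind of object that downstream tasks such as the dictionary learning or online change detection mentioned in the abstract can build on.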

January 27th, 2022, Julyan Arbel (INRIA)
Title: Understanding Priors in Bayesian Neural Networks at the Unit Level
Abstract: After a short introduction to Bayesian deep learning (that is to say, to Bayesian neural networks), I will present a recent work focused on Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. Such neural networks are well known to induce an L2, or weight-decay, regularization. Our results characterize a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units before and after activation becomes increasingly heavy-tailed with the depth of the layer. We show that first-layer units are Gaussian, second-layer units are sub-exponential, and units in deeper layers are characterized by so-called sub-Weibull distributions. This provides new theoretical insight into Bayesian neural networks, which we corroborate with experimental simulation results. Joint work with Mariia Vladimirova (Inria), Jakob Verbeek (Facebook AI) and Pablo Mesejo (University of Granada) http://proceedings.mlr.press/v97/vladimirova19a.html
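Note (a small simulation consistent with the result described above, not taken from the paper): drawing many independent Gaussian weight realizations for a ReLU network with one fixed input, and measuring the excess kurtosis of a pre-activation unit at each layer, illustrates the increasingly heavy tails. Widths, depth and sample sizes below are arbitrary.

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_nets, width, depth = 5000, 100, 4           # arbitrary sizes for illustration
x0 = rng.standard_normal(width)               # one fixed input, shared by all draws

units = {d: [] for d in range(1, depth + 1)}
for _ in range(n_nets):                       # one independent prior draw per network
    h = x0
    for d in range(1, depth + 1):
        W = rng.standard_normal((width, width)) / np.sqrt(width)  # Gaussian weight prior
        pre = W @ h                           # pre-activations of the d-th layer
        units[d].append(pre[0])               # record one unit per layer
        h = np.maximum(pre, 0.0)              # ReLU nonlinearity

for d in range(1, depth + 1):
    # Excess kurtosis is 0 for a Gaussian and increases for heavier tails.
    print(f"layer {d}: excess kurtosis ~ {kurtosis(units[d]):.2f}")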

January 6th, 2022, Gersende Fort (CNRS)
Title: FedEM: Expectation Maximization algorithm for Federated Learning
Abstract: The Expectation Maximization (EM) algorithm is an iterative procedure designed to optimize positive functions defined by an integral. EM is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations. To the best of our knowledge, EM designed for federated learning (FL) is an open question. The goal of this talk is to propose new EM methods designed for FL. First, we will describe the usual EM algorithm in the so-called 'complete-data model from the exponential family' framework and how it is related to Stochastic Approximation methods: EM iterations consist in updating conditional expectations of sufficient statistics; the limiting points of EM are the roots of a function which, in the FL setting, has a specific expression that calls for the use of Stochastic Approximation algorithms. Then, we will introduce and comment on two new EM algorithms for FL: the steps made by the local agents, the ones made by the central server, how the agent-server communication cost can be controlled, and how the possible partial participation of agents is managed. The second EM algorithm includes a variance reduction technique. Finite-time horizon complexity bounds will be commented on: what they imply about the role of design parameters on the computational cost of the E-steps, the computational cost of the M-steps, and the communication cost. Numerical examples will illustrate our findings. This is joint work with Aymeric Dieuleveut (CMAP, Ecole Polytechnique), Eric Moulines (CMAP, Ecole Polytechnique) and Geneviève Robin (LAMME, CNRS); published in the Proceedings of NeurIPS 2021.
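Note (a stripped-down sketch of the exponential-family view described above, not the FedEM algorithms of the paper): each agent computes conditional expectations of the sufficient statistics of its local data, and the server aggregates them before mapping to new parameters. The toy below uses a 1-D Gaussian mixture, full participation, and no compression or variance reduction; all names and sizes are illustrative.

import numpy as np

def local_e_step(X, w, mu, var):
    """One agent: conditional expectations of the sufficient statistics
    of a 1-D Gaussian mixture, given the current global parameters."""
    K = len(w)
    logp = np.stack([np.log(w[k]) - 0.5 * np.log(2 * np.pi * var[k])
                     - 0.5 * (X - mu[k]) ** 2 / var[k] for k in range(K)], axis=1)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)                  # responsibilities
    return r.sum(0), r.T @ X, r.T @ (X ** 2), len(X)   # local sufficient statistics

def server_m_step(stats):
    """Server: aggregate the agents' statistics and map them to new parameters."""
    n_k = sum(s[0] for s in stats); s_x = sum(s[1] for s in stats)
    s_xx = sum(s[2] for s in stats); n = sum(s[3] for s in stats)
    mu = s_x / n_k
    return n_k / n, mu, s_xx / n_k - mu ** 2           # weights, means, variances

# Toy federated run: 5 agents, synthetic two-component data, full participation.
rng = np.random.default_rng(0)
agents = [np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)]) for _ in range(5)]
w, mu, var = np.ones(2) / 2, np.array([-1.0, 1.0]), np.ones(2)
for _ in range(50):
    w, mu, var = server_m_step([local_e_step(X, w, mu, var) for X in agents])
print(w, mu, var)   # should approach (0.4, 0.6), (-2, 3), (1, 1)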

October 7th, 2021, Bruno Gaujal (INRIA)
Title: Discrete Mean Field Games: Existence of Equilibria and Convergence (joint work with Nicolas Gast and Josu Doncel)
Abstract: Mean field games were introduced by Lasry and Lions, as well as Huang, Caines and Malhame, in 2006 to model interactions between a large number of strategic agents (players), and have had a large success ever since. Most of the literature concerns continuous state spaces and describes a mean field game as a coupling between a Hamilton-Jacobi-Bellman equation and a Fokker-Planck equation. Here, we are interested in presenting mean field games with a finite number of states and a finite number of actions per player. In this case, the analog of the Hamilton-Jacobi-Bellman equation is a Bellman equation, and the discrete version of the Fokker-Planck equation is a Kolmogorov equation. The models we present in this seminar, in both the synchronous and asynchronous cases, include non-linear dynamics with explicit interactions between players. This covers several natural phenomena such as information/infection propagation or resource congestion. We show that the only requirement needed to guarantee the existence of a Mean Field Equilibrium in mixed strategies is that the cost is continuous with respect to the population distribution (convexity is not needed). This result nicely mimics the conditions for existence of a Nash equilibrium in the simpler case of static population games. The second part of the seminar concerns convergence of finite games to mean field limits. We show that a mean field equilibrium is always an ε-approximation of an equilibrium of a corresponding game with a finite number N of players, where ε goes to 0 when N goes to infinity. This is the discrete version of similar results in continuous games. However, we also show that not all equilibria of the finite version converge to a Nash equilibrium of the mean field limit of the game. We provide several counterexamples to illustrate this fact. They are all based on the following idea: the “tit for tat” principle allows one to define many equilibria in repeated games with N players. However, when the number of players is infinite, the deviation of a single player is not visible to the population, which therefore cannot punish that player in retaliation for the deviation. This implies that while the games with N players may have many equilibria, as stated by the folk theorem, this may not be the case for the limit game. This fact is well-known for large repeated games (the anti-folk theorem). However, to our knowledge, these results had not yet been investigated in mean field games.
Slides

June 24th, 2021, Luc Pronzato (CNRS - I3S)
Title: Maximum Mean Discrepancy, Bayesian integration and kernel herding for space-filling design
Abstract: A standard objective in computer experiments is to predict/interpolate the behaviour of an unknown function f on a compact domain from a few evaluations inside the domain. When little is known about the function, space-filling design is advisable: typically, points of evaluation spread out across the available space are obtained by minimizing a geometrical (for instance, minimax-distance) criterion or a discrepancy criterion measuring distance to uniformity. We focus our attention on sequential constructions where design points are added one at a time. The presentation is based on a survey, built on several recent results, that shows how energy functionals can be used to measure distance to uniformity. We investigate connections between design for integration of f with respect to a measure µ (quadrature design), construction of the (continuous) BLUE for the location model, and minimization of energy (kernel discrepancy) for signed measures. Integrally strictly positive definite kernels define strictly convex energy functionals, with an equivalence between the notions of potential and directional derivative showing the strong relation between discrepancy minimization and more traditional design of optimal experiments. Kernel herding algorithms, which are special instances of vertex-direction methods used in optimal design, can be applied to the construction of point sequences with suitable space-filling properties. Several illustrative examples are presented. [A minimal kernel herding sketch follows below.]
Slides, Video
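Note (a minimal sketch of kernel herding for space-filling design, not the construction of the talk): at each step the next design point maximizes the potential of the target measure minus the averaged kernel potential of the points already chosen, which greedily reduces the discrepancy to the target. The Gaussian kernel, bandwidth, candidate set and Monte Carlo approximation of the uniform measure below are arbitrary choices.

import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, h=0.15):
    # Gaussian kernel matrix between two point sets (bandwidth chosen arbitrarily).
    return np.exp(-cdist(A, B, 'sqeuclidean') / (2 * h ** 2))

def kernel_herding(n_points, d=2, n_candidates=1000, n_mc=2000, seed=0):
    rng = np.random.default_rng(seed)
    cand = rng.random((n_candidates, d))            # candidate locations in [0, 1]^d
    mc = rng.random((n_mc, d))                      # Monte Carlo sample of the uniform target
    potential = gaussian_kernel(cand, mc).mean(1)   # approximates E_mu[k(x, X)]
    design = []
    for t in range(n_points):
        penalty = (gaussian_kernel(cand, np.array(design)).sum(1) / (t + 1)
                   if design else 0.0)              # averaged potential of chosen points
        design.append(cand[np.argmax(potential - penalty)])
    return np.array(design)

points = kernel_herding(50)
print(points[:5])   # first few points of a space-filling sequence in the unit square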

April 29th, 2021, Julien Tierny (CNRS - Sorbonne Universite)
Title: An overview of topological methods in data analysis and visualization
Abstract: Topological methods in data analysis and visualization focus on discovering intrinsic structures hidden in data. Based on established tools (such as Morse theory or persistent homology), these methods enable the robust extraction of the main features of a dataset into stable, concise and multi-scale descriptors that facilitate data analysis and visualization. In this talk, I will give an intuitive overview of the main tools used in topological data analysis and visualization (persistence diagrams, Reeb graphs, Morse-Smale complexes, etc.) with applications to concrete use cases in computational fluid dynamics, medical imaging, or quantum chemistry. I will conclude the talk by mentioning some of my contributions to this topic and I will discuss ongoing research efforts. The talk will be illustrated with results produced with the "Topology ToolKit" (TTK), an open-source library (BSD license) that we develop with collaborators to showcase our research. Tutorials for reproducing these experiments are available on the TTK website: https://topology-tool-kit.github.io/
Slides, Video

February 25th, 2021, Arthur Gretton (Gatsby Computational Neuroscience Unit - UCL)
Title: Generalized Energy-Based Models
Abstract: I will introduce Generalized Energy-Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model, as in a Generative Adversarial Network), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and the base jointly constitute the final model, unlike GANs, which retain only the base distribution (the “generator”). In particular, while the energy function is analogous to the GAN critic function, it is not discarded after training. GEBMs are trained by alternating between learning the energy and the base, much like a GAN. Both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better quality samples. Empirically, the GEBM samples on image-generation tasks are of better quality than those from the learned generator alone, indicating that, all else being equal, the GEBM will outperform a GAN of the same complexity. GEBMs also achieve state-of-the-art performance on density modelling tasks when using base measures with an explicit form. The talk is based on the ICLR 2021 paper https://arxiv.org/abs/2003.05033 [A generic sampling sketch follows below.]
Slides, Video
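Note (a generic sketch, not the sampler of the paper): once the base generator G and the energy E are trained, samples can be refined by running MCMC on the latent space. The PyTorch sketch below uses unadjusted Langevin dynamics under the assumed convention that the refined latent density is proportional to a standard normal prior times exp(-E(G(z))); the sign convention, step size and the stand-in networks G and E are assumptions made for illustration.

import torch

def latent_langevin(G, E, n_steps=200, step=1e-2, dim=64, n=16):
    """Unadjusted Langevin dynamics on latents z, targeting (under the assumed
    sign convention) p0(z) * exp(-E(G(z))) with a standard normal prior p0.
    E is assumed to return one energy value per sample."""
    z = torch.randn(n, dim, requires_grad=True)
    for _ in range(n_steps):
        log_target = (-0.5 * (z ** 2).sum(dim=1) - E(G(z))).sum()
        grad, = torch.autograd.grad(log_target, z)
        with torch.no_grad():
            z += 0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return G(z).detach()

# Toy usage with stand-in networks (purely illustrative, not trained models):
G = torch.nn.Linear(64, 2)                     # pretend generator
E = lambda x: (x ** 2).sum(dim=1)              # pretend energy function
samples = latent_langevin(G, E)
print(samples.shape)                           # torch.Size([16, 2])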

November 26th, 2020, Jean-Michel Hupé (FRAMESPA, Université de Toulouse Jean Jaurès & CNRS)
Title: What responsibilities for scientists in the turmoil of the anthropocene?
Abstract: Climate change, the sixth extinction of species and the general disruption of the Earth system no longer concern a worrying future: they are happening now. This new age of the anthropocene questions the responsibility of scientists in at least four ways. (1) Scientists of the IPCC have been raising the alarm for more than 30 years, yet CO2 emissions and ecological destructions have continued increasing. This failure of the alarm questions the neutral position of scientists and calls on scientists to leave their ivory tower and find a new way to share their knowledge, as experimented by the Studies in Political Ecology groups. (2) The 2015 Paris Agreement legally binds France to reach carbon neutrality by 2050. Everybody and all activities are concerned, including research. The Labos 1point5 initiative enjoins all researchers to measure their professional carbon footprint and then reduce it. (3) Science is urged to promise technological breakthroughs to meet a double, unprecedented and urgent challenge: to modify the socio-economic system in order to do without fossil fuels (which constitute 80% of energy consumption and are the engine of economic growth), while adapting to a degraded environment. Shouldn't these two challenges be the priority for all scientists? (4) However, scientific research drives innovations that entail structural changes in our society, hence bearing some responsibility for the ongoing ecological destructions. So should science continue to promote economic growth through innovation, in particular in the energy-hungry field of digital applications and artificial intelligence, as currently promoted by European research agencies?
Slides

November 5th, 2020, Jean-Bernard Lasserre (DR CNRS / LAAS)
Title: Moment-SOS hierarchies in and outside optimization
Abstract: We introduce the Moment-SOS hierarchy, initially developed for polynomial optimization, and briefly describe some of its many applications in various areas of engineering, including super-resolution in signal processing, sparse polynomial interpolation, optimal design in statistics, volume computation in computational geometry, control and optimal control, and non-linear PDEs. In a second part, we introduce an alternative (different) SOS hierarchy which provides a sequence of upper bounds converging to the global minimum of a polynomial on simple sets such as boxes, ellipsoids, simplices, discrete hypercubes and their affine transformations. [The standard form of the hierarchy is recalled below.]
Slides, A drawing
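Note (standard textbook form, recalled for context only): for a polynomial f and a basic semi-algebraic set K, the SOS side of the hierarchy produces a sequence of lower bounds,

f^{\star} = \min_{x \in K} f(x), \qquad K = \{\, x \in \mathbb{R}^n : g_j(x) \ge 0,\ j = 1, \dots, m \,\},

\rho_d = \sup_{\lambda,\, \sigma_j} \Bigl\{\, \lambda \ :\ f - \lambda = \sigma_0 + \sum_{j=1}^{m} \sigma_j\, g_j,\ \ \sigma_j \ \text{sums of squares},\ \deg(\sigma_0) \le 2d,\ \deg(\sigma_j g_j) \le 2d \,\Bigr\} \ \le\ f^{\star},

where each \rho_d is computable by semidefinite programming and \rho_d increases to f^{\star} as d grows under an Archimedean (Putinar-type) assumption. The alternative hierarchy mentioned at the end of the abstract produces upper bounds instead.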

July 2nd, 2020, Cédric Févotte (DR CNRS / IRIT)
Title: Robust nonnegative matrix factorisation with the beta-divergence and applications in imaging
Abstract: Data is often available in matrix form, in which columns are samples, and processing of such data often entails finding an approximate factorisation of the matrix into two factors. The first factor (the “dictionary”) yields recurring patterns characteristic of the data. The second factor (the “activation matrix”) describes in which proportions each data sample is made of these patterns. Nonnegative matrix factorisation (NMF) is a popular technique for analysing data with nonnegative values, with applications in many areas such as text information retrieval, user recommendation, audio signal processing, and hyperspectral imaging. In a first part, I will give a short tutorial about NMF for data processing and introduce a general majorisation-minimisation framework for NMF with the beta-divergence, a continuous family of loss functions that takes the quadratic loss, KL divergence and Itakura-Saito divergence as special cases. Secondly, I will present applications for hyperspectral unmixing in remote sensing and factor analysis in dynamic PET, introducing robust variants of NMF that account for outliers, nonlinear phenomena or specific binding. [A minimal sketch of the multiplicative updates follows below.] References: C. Févotte, J. Idier. Algorithms for nonnegative matrix factorization with the beta-divergence. Neural computation, 2011. C. Févotte, N. Dobigeon. Nonlinear hyperspectral unmixing with robust nonnegative matrix factorization. IEEE Transactions on Image Processing, 2015. Y. C. Cavalcanti, T. Oberlin, N. Dobigeon, C. Févotte, S. Stute, M. J. Ribeiro, C. Tauber. Factor analysis of dynamic PET images: beyond Gaussian noise. IEEE Transactions on Medical Imaging, 2019.
Slides
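Note (a minimal numpy sketch in the spirit of the multiplicative updates of Févotte & Idier (2011), not the robust variants of the talk): for beta in [1, 2] (quadratic loss at beta = 2, KL divergence at beta = 1) the classical multiplicative updates read as below; matrix sizes, initialization and iteration count are arbitrary.

import numpy as np

def nmf_beta(V, K, beta=1.0, n_iter=200, seed=0):
    """Multiplicative-update NMF for the beta-divergence, with beta in [1, 2]
    (beta = 2: quadratic loss, beta = 1: KL divergence)."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-3
    H = rng.random((K, N)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# Toy usage on a random nonnegative matrix (illustrative only).
V = np.abs(np.random.default_rng(1).random((30, 50)))
W, H = nmf_beta(V, K=5, beta=1.0)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error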

June 11th, 2020, Laurent Massoulié (MSR-INRIA)
Title: From tree matching to graph alignment
Abstract: In this work we consider alignment of sparse graphs, for which we introduce the Neighborhood Tree Matching Algorithm (NTMA). For correlated Erdős-Rényi random graphs, we prove that the algorithm returns -- in polynomial time -- a positive fraction of correctly matched vertices, and a vanishing fraction of mismatches. This result holds with average degree of the graphs in O(1) and correlation parameter s bounded away from 1, conditions under which random graph alignment is particularly challenging. As a byproduct of the analysis we introduce a matching metric between trees and characterize it for several models of correlated random trees. These results may be of independent interest, yielding for instance efficient tests for determining whether two random trees are correlated or independent. Joint work with Luca Ganassali.
Slides

November 14th, 2019, Shiro Ikeda (The Institute of Statistical Mathematics, Tokyo)
Title: Data science for the EHT blackhole shadow imaging
Abstract: Last April, the EHT (Event Horizon Telescope) collaboration released the first image of the M87 black hole shadow. The EHT is a huge VLBI (very long baseline interferometer), and an inverse problem must be solved in order to obtain an image. We have developed a new imaging method for the EHT, and it is one of the three methods which contributed to the released image. In this talk, I explain how the method was developed and the final image was created.

June 6th, 2019, Eric Chassande-Mottin (CNRS/IN2P3 Astroparticule et Cosmologie APC, Univ Paris Diderot)
Title: Two black holes in the haystack
Abstract: On April 1, 2019, the gravitational-wave detectors Advanced LIGO and Advanced Virgo started their third observation campaign. They have since detected about ten candidate signals associated with the merger of pairs of compact astrophysical objects (black holes and neutron stars). The analysis that will eventually confirm their exact nature and astrophysical origin is still in progress. The fruit of an experimental feat rewarded by the 2017 Nobel Prize in Physics, this result also relies on a range of signal analysis techniques and procedures that we will review.

March 28th, 2019, André Ferrari (Lagrange Laboratory (Nice))
Title: Image reconstruction for radio astronomy
Abstract: After a general introduction to radio astronomy, including the future SKA (Square Kilometre Array), the presentation will focus on image reconstruction in radio interferometry, with particular attention to the derivation of the measurement equation. We will then present the MUFFIN (MUlti Frequency image reconstruction For radio INterferometry) algorithm, which aims to reconstruct the sky at multiple wavelengths. MUFFIN allows parallel computation of the most demanding computational steps and automatic tuning of the regularization strengths. Simulation results based on realistic data will be presented.
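Note (the standard simplified form, recalled for context; the talk derives a more complete version): in the small-field approximation, an interferometer measures for each baseline with coordinates (u, v) a visibility that is a Fourier coefficient of the apparent sky,

V(u, v) \approx \iint A(l, m)\, I(l, m)\, e^{-2 i \pi (u l + v m)}\, \mathrm{d}l\, \mathrm{d}m,

where I is the sky brightness, A the primary beam and (l, m) the direction cosines; image reconstruction amounts to inverting this relation from incomplete, noisy (u, v) coverage.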

January 10th, 2019, Pablo Jensen (ENS Lyon / IXXI)
Title: The unexpected link between neural nets and liberalism
Abstract: Sixty years ago, Frank Rosenblatt, a psychologist working for the army, invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. Hayek is well known for his 1974 Nobel prize… and for his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: "Personally, I prefer a liberal dictator to a democratic government that lacks liberalism". This talk presents ongoing work on the link between Hayek’s ideology and neural networks.