Past seminars


You can find below a list of past seminars, with slides when available.

June 24th, 2021, Luc Pronzato (CNRS - I3S)
Title: Maximum Mean Discrepancy, Bayesian integration and kernel herding for space-filling design
Abstract: A standard objective in computer experiments is to predict/interpolate the behaviour of an unknown function f on a compact domain from a few evaluations inside the domain. When little is known about the function, space-filling design is advisable: typically, points of evaluation spread out across the available space are obtained by minimizing a geometrical (for instance, minimax-distance) or a discrepancy criterion measuring distance to uniformity. We focus our attention on sequential constructions where design points are added one at a time. The presentation is based on a survey built on several recent results that show how energy functionals can be used to measure distance to uniformity. We investigate connections between design for integration of f with respect to a measure µ (quadrature design), construction of the (continuous) BLUE for the location model, and minimization of energy (kernel discrepancy) for signed measures. Integrally strictly positive definite kernels define strictly convex energy functionals, with an equivalence between the notions of potential and directional derivative showing the strong relation between discrepancy minimization and more traditional design of optimal experiments. Kernel herding algorithms, which are special instances of vertex-direction methods used in optimal design, can be applied to the construction of point sequences with suitable space-filling properties. Several illustrative examples are presented.
Slides, Video
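As an illustration of the kernel herding construction mentioned in the abstract, here is a minimal sketch of greedy point selection on the unit square, assuming a Gaussian kernel and a uniform target measure; the candidate set, bandwidth and Monte Carlo approximation of the potential are illustrative choices, not taken from the talk.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.2):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kernel_herding(candidates, mu_samples, n_points, bandwidth=0.2):
    """Greedy kernel herding: add points one at a time so that the empirical
    measure of the selected points stays close (in MMD) to the target measure,
    here the uniform measure on [0,1]^2 represented by a large sample mu_samples."""
    # potential of the target measure at each candidate: E_{x'~mu} k(x, x')
    potential = gaussian_kernel(candidates, mu_samples, bandwidth).mean(axis=1)
    selected = []
    # running sum of k(x, x_i) over the already selected x_i, at each candidate
    running = np.zeros(len(candidates))
    for t in range(n_points):
        # herding criterion: target potential minus average potential of selected points
        scores = potential - running / max(t, 1)
        idx = int(np.argmax(scores))
        selected.append(candidates[idx])
        running += gaussian_kernel(candidates, candidates[idx:idx + 1], bandwidth)[:, 0]
    return np.array(selected)

rng = np.random.default_rng(0)
candidates = rng.uniform(size=(2000, 2))   # candidate points in [0,1]^2
mu_samples = rng.uniform(size=(5000, 2))   # Monte Carlo sample of the uniform measure
design = kernel_herding(candidates, mu_samples, n_points=20)
print(design)
```

Each new point maximizes the target potential minus the average kernel potential of the points already chosen, which greedily decreases the MMD between the design and the uniform measure.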

April 29th, 2021, Julien Tierny (CNRS - Sorbonne Université)
Title: An overview of topological methods in data analysis and visualization
Abstract: Topological methods in data analysis and visualization focus on discovering intrinsic structures hidden in data. Based on established tools (such as Morse theory or persistent homology), these methods enable the robust extraction of the main features of a dataset into stable, concise and multi-scale descriptors that facilitate data analysis and visualization. In this talk, I will give an intuitive overview of the main tools used in topological data analysis and visualization (persistence diagrams, Reeb graphs, Morse-Smale complexes, etc.) with applications to concrete use cases in computational fluid dynamics, medical imaging, and quantum chemistry. I will conclude the talk by mentioning some of my contributions to this topic and I will discuss ongoing research efforts. The talk will be illustrated with results produced with the "Topology ToolKit" (TTK), an open-source library (BSD license) that we develop with collaborators to showcase our research. Tutorials for reproducing these experiments are available on the TTK website: https://topology-tool-kit.github.io/
Slides, Video
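As a concrete illustration of persistence, one of the descriptors listed in the abstract, here is a minimal sketch of 0-dimensional persistence for the sublevel-set filtration of a 1D scalar function (e.g. a time series), computed with a union-find merge and the elder rule; it is a toy stand-in, not TTK code.

```python
def persistence_0d(values):
    """0-dimensional persistence pairs of the sublevel-set filtration of a
    1D scalar function sampled on a line. Returns (birth, death) pairs;
    the global minimum is paired with +inf."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = [None] * n          # union-find forest, None = not yet added

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    comp_min = {}                # for each component root, index of its minimum (birth)
    pairs = []
    for i in order:              # sweep values in increasing order
        parent[i] = i
        comp_min[i] = i
        for j in (i - 1, i + 1):           # neighbours on the line
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the component with the higher minimum dies here
                if values[comp_min[ri]] < values[comp_min[rj]]:
                    ri, rj = rj, ri        # ri is now the younger component
                birth, death = values[comp_min[ri]], values[i]
                if birth < death:          # skip zero-persistence pairs
                    pairs.append((birth, death))
                parent[ri] = rj            # rj keeps its older, lower minimum
    # the surviving component (global minimum) never dies
    for r in {find(i) for i in range(n)}:
        pairs.append((values[comp_min[r]], float("inf")))
    return pairs

print(persistence_0d([0.0, 2.0, 1.0, 3.0, 0.5, 2.5]))
```

On this sample series, each local minimum is paired with the value at which its component merges into an older one, and the global minimum is paired with infinity.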

February 25th, 2021, Arthur Gretton (Gatsby Computational Neuroscience Unit - UCL)
Title: Generalized Energy-Based Models
Abstract: I will introduce Generalized Energy-Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model, as in a Generative Adversarial Network), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the “generator”). In particular, while the energy function is analogous to the GAN critic function, it is not discarded after training. GEBMs are trained by alternating between learning the energy and the base, much like a GAN. Both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better quality samples. Empirically, the GEBM samples on image-generation tasks are of better quality than those from the learned generator alone, indicating that all else being equal, the GEBM will outperform a GAN of the same complexity. GEBMs also return state-of-the-art performance on density modelling tasks when using base measures with an explicit form. The talk is based on the ICLR 2021 paper https://arxiv.org/abs/2003.05033
Slides, Video
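The MCMC step on the latent space described in the abstract can be sketched as unadjusted Langevin dynamics targeting a posterior proportional to the latent prior times exp(-E(g(z))); the toy generator, energy function and step size below are placeholders, not the trained models or the sampler settings from the paper.

```python
import torch

# Placeholder generator g: latent z in R^2 -> sample in R^2 (stands in for a trained base)
g = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))

def energy(x):
    """Placeholder energy E(x); in a GEBM this would be the trained critic-like energy."""
    return (x ** 2).sum(dim=-1)

def latent_langevin(n_samples=64, n_steps=200, step_size=1e-2):
    """Unadjusted Langevin dynamics on z, targeting p(z) proportional to N(z; 0, I) * exp(-E(g(z)))."""
    z = torch.randn(n_samples, 2, requires_grad=True)
    for _ in range(n_steps):
        # negative log-density up to a constant: 0.5 ||z||^2 + E(g(z))
        neg_log_p = 0.5 * (z ** 2).sum(dim=-1) + energy(g(z))
        grad, = torch.autograd.grad(neg_log_p.sum(), z)
        with torch.no_grad():
            z = z - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(z)
        z.requires_grad_(True)
    return g(z).detach()

samples = latent_langevin()
print(samples.shape)
```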

November 26th, 2020, Jean-Michel Hupé (FRAMESPA, Université de Toulouse Jean Jaurès & CNRS)
Title: What responsibilities for scientists in the turmoil of the anthropocene?
Abstract: Climate change, the sixth extinction of species and the general disruption of the earth system no longer concern a worrying future: they are happening now. This new age of the anthropocene questions the responsibility of scientists in at least four ways. (1) Scientists of the IPCC have been raising the alarm for more than 30 years, yet CO2 emissions and ecological destruction have continued to increase. This failure of the alarm questions the neutral position of scientists and requires them to leave their ivory tower and find new ways to share their knowledge, as explored by the Studies in Political Ecology groups. (2) The 2015 Paris Agreement legally binds France to reach carbon neutrality by 2050. Everybody and all activities are concerned, including research. The Labos 1point5 initiative enjoins all researchers to measure their professional carbon footprint and then reduce it. (3) Science is urged to promise technological breakthroughs to meet a double, unprecedented and urgent challenge: to modify the socio-economic system in order to do without fossil fuels (which constitute 80% of energy consumption and are the engine of economic growth), while adapting to a degraded environment. Shouldn’t these two challenges be the priority for all scientists? (4) However, scientific research drives innovations that entail structural changes in our society, hence bearing some responsibility for the ongoing ecological destruction. So should science continue to promote economic growth through innovation, in particular in the energy-hungry field of digital applications and artificial intelligence, as currently promoted by European research agencies?
Slides

November 5th, 2020, Jean-Bernard Lasserre (DR CNRS / LAAS)
Title: Moment-SOS hierarchies in and outside optimization
Abstract: We introduce the Moment-SOS hierarchy, initially developed for polynomial optimization, and briefly describe some of its many applications in various areas of engineering, including super-resolution in signal processing, sparse polynomial interpolation, optimal design in statistics, volume computation in computational geometry, control and optimal control, and nonlinear PDEs. In a second part we introduce an alternative (different) SOS hierarchy which provides a sequence of upper bounds converging to the global minimum of a polynomial on simple sets like boxes, ellipsoids, simplices, discrete hypercubes and their affine transformations.
Slides, A drawing
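For reference, the polynomial optimization form of the hierarchy mentioned above can be written as follows (a standard formulation, not reproduced from the slides):

```latex
% Polynomial optimization over a basic semi-algebraic set K
f^\star \;=\; \min_{x \in K} f(x), \qquad
K \;=\; \{\, x \in \mathbb{R}^n : g_j(x) \ge 0,\ j = 1, \dots, m \,\}.

% Order-d moment relaxation (Moment-SOS hierarchy): optimize over pseudo-moment
% sequences y, with M_d(y) the moment matrix and M_{d-d_j}(g_j\, y) the localizing matrices
\rho_d \;=\; \inf_{y}\; L_y(f)
\quad \text{s.t.} \quad
M_d(y) \succeq 0, \quad
M_{d - d_j}(g_j\, y) \succeq 0 \ (j = 1, \dots, m), \quad
y_0 = 1.

% Under an Archimedean assumption, the bounds increase monotonically to the global minimum
\rho_d \;\uparrow\; f^\star \quad \text{as } d \to \infty .
```

Here L_y denotes the Riesz functional associated with the pseudo-moment sequence y, and d_j = ⌈deg g_j / 2⌉; each relaxation is a semidefinite program.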

July 2nd, 2020, Cédric Févotte (DR CNRS / IRIT)
Title: Robust nonnegative matrix factorisation with the beta-divergence and applications in imaging
Abstract: Data is often available in matrix form, in which columns are samples, and processing of such data often entails finding an approximate factorisation of the matrix into two factors. The first factor (the “dictionary”) yields recurring patterns characteristic of the data. The second factor (“the activation matrix”) describes in which proportions each data sample is made of these patterns. Nonnegative matrix factorisation (NMF) is a popular technique for analysing data with nonnegative values, with applications in many areas such as text information retrieval, user recommendation, audio signal processing, and hyperspectral imaging. In a first part, I will give a short tutorial about NMF for data processing and introduce a general majorisation-minimisation framework for NMF with the beta-divergence, a continuous family of loss functions that takes the quadratic loss, KL divergence and Itakura-Saito divergence as special cases. In a second part, I will present applications to hyperspectral unmixing in remote sensing and factor analysis in dynamic PET, introducing robust variants of NMF that account for outliers, nonlinear phenomena or specific binding. References: C. Févotte, J. Idier. Algorithms for nonnegative matrix factorization with the beta-divergence. Neural Computation, 2011. C. Févotte, N. Dobigeon. Nonlinear hyperspectral unmixing with robust nonnegative matrix factorization. IEEE Transactions on Image Processing, 2015. Y. C. Cavalcanti, T. Oberlin, N. Dobigeon, C. Févotte, S. Stute, M. J. Ribeiro, C. Tauber. Factor analysis of dynamic PET images: beyond Gaussian noise. IEEE Transactions on Medical Imaging, 2019.
Slides
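A minimal sketch of the multiplicative majorisation-minimisation updates for NMF with the beta-divergence, in the spirit of the Févotte-Idier reference above; the matrix sizes, rank and iteration count are illustrative, and the exponent correction derived in the paper for beta outside [1, 2] is omitted.

```python
import numpy as np

def nmf_beta(V, rank, beta=1.0, n_iter=200, eps=1e-9, seed=0):
    """NMF V ~ W H with the beta-divergence, via multiplicative MM updates.
    beta = 2: quadratic loss, beta = 1: KL divergence, beta = 0: Itakura-Saito."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        # H <- H * (W^T [V . WH^(beta-2)]) / (W^T WH^(beta-1))   (elementwise ratio)
        H *= (W.T @ (V * WH ** (beta - 2))) / (W.T @ WH ** (beta - 1))
        WH = W @ H + eps
        # W <- W * ([V . WH^(beta-2)] H^T) / (WH^(beta-1) H^T)
        W *= ((V * WH ** (beta - 2)) @ H.T) / (WH ** (beta - 1) @ H.T)
    return W, H

# toy usage on a random nonnegative matrix
V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
W, H = nmf_beta(V, rank=5, beta=1.0)
print(np.linalg.norm(V - W @ H))
```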

June 11th, 2020, Laurent Massoulié (MSR-INRIA)
Title: From tree matching to graph alignment
Abstract: In this work we consider alignment of sparse graphs, for which we introduce the Neighborhood Tree Matching Algorithm (NTMA). For correlated Erdős-Rényi random graphs, we prove that the algorithm returns -- in polynomial time -- a positive fraction of correctly matched vertices, and a vanishing fraction of mismatches. This result holds when the average degree of the graphs is O(1) and the correlation parameter s is bounded away from 1, conditions under which random graph alignment is particularly challenging. As a byproduct of the analysis we introduce a matching metric between trees and characterize it for several models of correlated random trees. These results may be of independent interest, yielding for instance efficient tests for determining whether two random trees are correlated or independent. Joint work with Luca Ganassali.
Slides
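The correlated Erdős-Rényi model referenced in the abstract can be sketched as follows; this is one common parameterization, shown only to fix the setting, and it is not the NTMA algorithm itself. A parent graph with average degree about λ is sampled, each child keeps parent edges independently with probability s, and the second child is relabelled by a hidden permutation.

```python
import numpy as np

def correlated_erdos_renyi(n, lam, s, seed=0):
    """Sample two correlated Erdos-Renyi graphs on n vertices.
    Parent graph G(n, lam/n); each child keeps each parent edge independently
    with probability s; the second child is relabelled by a hidden permutation."""
    rng = np.random.default_rng(seed)
    parent = np.triu(rng.random((n, n)) < lam / n, k=1)   # parent edges (upper triangle)
    keep1 = parent & (rng.random((n, n)) < s)
    keep2 = parent & (rng.random((n, n)) < s)
    A = keep1 | keep1.T                      # first observed graph
    perm = rng.permutation(n)                # hidden ground-truth alignment
    B_unrelabelled = keep2 | keep2.T
    B = B_unrelabelled[np.ix_(perm, perm)]   # second observed graph, relabelled
    return A, B, perm

A, B, perm = correlated_erdos_renyi(n=1000, lam=3.0, s=0.8)
print(A.sum() // 2, B.sum() // 2)            # edge counts of the two graphs
```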

November 14th, 2019, Shiro Ikeda (The Institute of Statistical Mathematics, Tokyo)
Title: Data science for the EHT black hole shadow imaging
Abstract: Last April, the EHT (Event Horizon Telescope) collaboration released the first image of the M87 black hole shadow. EHT is a huge VLBI (very long baseline interferometer), and an inverse problem must be solved in order to obtain an image. We have developed a new imaging method for EHT, and it is one of the three methods that contributed to the released image. In this talk, I will explain how the method was developed and how the final image was created.

June 6th, 2019, Eric Chassande-Mottin (CNRS/IN2P3 Astroparticule et Cosmologie APC, Univ Paris Diderot)
Title: Two black holes in the haystack
Abstract: On April 1, 2019, the gravitational wave detectors Advanced LIGO and Advanced Virgo started their third observation campaign. They have since detected about ten candidate signals associated with the merger of pairs of compact astrophysical objects (black holes and neutron stars). The analysis that will eventually confirm their exact nature and astrophysical origin is still in progress. The fruit of an experimental feat rewarded with the 2017 Nobel Prize in Physics, this result also relies on a range of signal analysis techniques and procedures that we will review.
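Among the signal analysis techniques alluded to above, a central one (not detailed in the abstract, so stated here only as background) is matched filtering of the detector data against template waveforms:

```latex
% Noise-weighted inner product between data d and a template h,
% with S_n(f) the one-sided noise power spectral density of the detector
\langle d, h \rangle \;=\; 4 \,\mathrm{Re} \int_0^{\infty} \frac{\tilde d(f)\, \tilde h^{*}(f)}{S_n(f)}\, \mathrm{d}f,

% Signal-to-noise ratio of the matched filter
\rho \;=\; \frac{\langle d, h \rangle}{\sqrt{\langle h, h \rangle}} .
```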

March 28th, 2019, André Ferrari (Lagrange Laboratory (Nice))
Title: Image reconstruction for radio astronomy
Abstract: After a general introduction to radio astronomy, including the future SKA (Square Kilometer Array), the presentation will focus on image reconstruction in radio-interferometry with particular attention to the derivation of the measurement equation. We will then present the MUFFIN (MUlti Frequency image reconstruction For radio INterferometry) algorithm, which aims to reconstruct the sky at multiple wavelengths. MUFFIN allows parallel computation of the most demanding steps and automatic tuning of the regularization strengths. Simulation results based on realistic data will be presented.
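In its simplest narrow-field form, neglecting the w-term and direction-dependent effects, the measurement equation mentioned above relates the visibilities to Fourier samples of the sky image; the noise term and wavelength index below are added for illustration.

```latex
% Visibility measured by a baseline (u, v) at wavelength index t,
% as a noisy Fourier sample of the sky brightness I_t(l, m)
V_t(u, v) \;=\; \iint I_t(l, m)\, e^{-2\pi i (u l + v m)}\, \mathrm{d}l\, \mathrm{d}m \;+\; n_t(u, v).
```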

January 10th, 2019, Pablo Jensen (ENS Lyon / IXXI)
Title: The unexpected link between neural nets and liberalism
Abstract: Sixty years ago, Arthur Rosenblatt, a psychologist working for the army invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. He is well-known for his 1974 Nobel prize… and by his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: «Personally, I prefer a liberal dictator to a democratic government that lacks liberalism». This talk presents ongoing work on the link between Hayek’s ideology and neural networks.