[đ„ WORKSHOP] "Mathematical Foundations of AI" - 6th Ă©dition
![[đ„ WORKSHOP] "Mathematical Foundations of AI" - 6th Ă©dition](/sites/default/files/2025-10/Workshop%20MathsIA_0.png)
Registration coming soon!
The "Mathematical Foundations of AI" day, organized jointly by the DataIA Institute and SCAI, in association with the following scientific societies: the Jacques Hadamard Mathematical Foundation (FMJH), the Paris Mathematical Sciences Foundation (FSMP), the MALIA group of the French Statistical Society, and the Francophone Machine Learning Society (SSFAM), aims to provide an overview of some promising research directions at the interface between statistical learning and AI.
It is part of the Maths & AI network in the Ile-de-France region, of which the FMJH and DataIA are members.
This new edition will focus on issues of identifiability, whether for tensor analysis, neural networks, or generative AI. The day will feature three plenary presentations by renowned researchers and specialists in the field:
- François Malgouyres (University of Toulouse), specialist in tensors and tensor identifiability issues;
- Elisabeth Gassiat (Orsay Mathematics Laboratory), professor and leading statistician, who has conducted research on VAE identifiability issues;
- Pavlo Mozharovskyi (Télécom ParisTech), professor and recognized expert on explainability, with research conducted on concept-based learning.
This day is also an opportunity for young researchers to present their work through short presentations (see call for contributions).
Organizing Committee
- Marianne Clausel (Université de Lorraine)
- Emilie Chouzenoux (INRIA Saclay, Institut DataIA)
Scientific Committee
- Ricardo Borsoi (CNRS, CRAN)
- Stéphane Chrétien (Univ. Lyon 2)
- Sylvain Le Corff (Sorbonne Université)
- Myriam Tami (CentraleSupélec)
As part of the workshop, participants are invited to submit a detailed abstract for a possible oral or poster presentation. In the selection process, the committee aims to give the best possible visibility to doctoral students, researchers, and faculty members. When submitting your application by email (maths-ia@inria.fr), please include the following information: first and last name, institution, status, and title/abstract.
Financial assistance for travel may be granted by the committee upon justified request.
Application deadline: November 21, 2025.
Geometry-induced regularization and identifiability of deep ReLU networks
Abstract: The first part of the presentation will introduce, through a simple educational example, the mathematical results developed in the second part, so as to make the concepts accessible to as many people as possible. Due to an implicit regularization that favors "good" networks, neural networks with a large number of parameters do not generally overfit. Related phenomena that are still poorly understood include the properties of flat minima, saddle-to-saddle dynamics, and neuron alignment. To analyze these phenomena, we study the local geometry of deep ReLU neural networks. We show that, for a fixed architecture, as the weights vary, the image of a sample X forms a set whose local dimension changes. The parameter space is thus partitioned into regions where this local dimension remains constant. The local dimension is invariant under the natural symmetries of ReLU networks (i.e., positive rescalings and neuron permutations). We then establish that the geometry of the network induces regularization, with the local dimension serving as a key measure of regularity. Furthermore, we relate the local dimension to a new notion of flatness of minima as well as to saddle-to-saddle dynamics. For networks with a single hidden layer, we also show that the local dimension is related to the number of linear regions perceived by X, which sheds light on the regularization effect. This result is supported by experiments and linked to neuron alignment. I will then present experiments on MNIST that highlight the geometry-induced regularization in this context. Finally, I will connect properties of the local dimension to the local identifiability of the network parameters.
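To make the abstract's notion of local dimension concrete, here is a minimal sketch, assuming a PyTorch environment: near a parameter vector theta0, the local dimension of the image set {f_theta(X) : theta} can be probed as the rank of the Jacobian of the network outputs on the sample X with respect to the parameters. The network architecture, sample size, and helper names below are our own illustrative choices, not material from the talk.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not from the talk): estimate the local dimension
# of {f_theta(X) : theta} near theta0 as the rank of the Jacobian of the
# network outputs on a sample X with respect to the parameters.

torch.manual_seed(0)

# A small one-hidden-layer ReLU network f_theta : R^2 -> R.
net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
X = torch.randn(20, 2)  # sample X of 20 points in R^2

def outputs(flat_theta):
    """Network outputs on X as a function of a flat parameter vector."""
    tensors, i = [], 0
    for p in net.parameters():  # reuse shapes to unflatten flat_theta
        tensors.append(flat_theta[i:i + p.numel()].view(p.shape))
        i += p.numel()
    W1, b1, W2, b2 = tensors
    h = torch.relu(X @ W1.T + b1)      # hidden layer
    return (h @ W2.T + b2).flatten()   # outputs on all of X

theta0 = torch.cat([p.detach().flatten() for p in net.parameters()])

# Rank of the (|X| x #params) Jacobian = local dimension of the image set.
J = torch.autograd.functional.jacobian(outputs, theta0)
print("local dimension:", torch.linalg.matrix_rank(J).item(),
      "out of", theta0.numel(), "parameters")
```

Consistent with the invariance stated in the abstract, the continuous rescaling symmetries of ReLU networks (multiplying a hidden neuron's incoming weights and bias by c > 0 and its outgoing weights by 1/c) leave the outputs on X unchanged, so the directions they generate lie in the kernel of this Jacobian and never contribute to the local dimension.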
Biography: François Malgouyres is a professor at the University of Toulouse (France). His research focuses on the theoretical and methodological foundations of deep learning, with a particular interest in understanding the mathematical structure of neural networks. He has worked on network geometry, parameter identifiability, function approximation with neural networks, weight quantization in recurrent networks, and the design of orthogonal convolutional layers. He has also studied the straight-through estimator, the reference algorithm for optimizing quantized weights, and its applications to sparse signal reconstruction. Before joining the University of Toulouse, François Malgouyres was a lecturer at Paris Nord University, before that a postdoctoral fellow at the University of California, Los Angeles (UCLA), and earlier a doctoral student at ENS Paris-Saclay (then located in Cachan).
10:00 - 10:30am | Coffee Break
Title (TBA)
Abstract:
Biography:
12:30 - 1:45pm | Lunch Break
Title (TBA)
Abstract:
Biography:
2:45 - 3:30pm | Sweet Break