« Le Séminaire Palaisien » | Mathurin Massias & Samuel Hurault
Each seminar session is divided into two scientific presentations of 40 minutes each: 30 minutes of talk and 10 minutes of questions. Mathurin Massias & Samuel Hurault will host the November 2025 session!
Registration is free but compulsory, subject to availability. A buffet will be served at the end of the seminar.
Modern deep generative models now produce high-quality synthetic samples, often indistinguishable from real training data. A growing body of research aims to understand why recent methods, such as diffusion and flow matching techniques, generalize so effectively. Proposed explanations include inductive biases in deep learning architectures and the stochastic nature of the conditional flow matching loss. In this work, we rule out the noisy nature of the loss as the main factor behind generalization in flow matching. First, we show empirically that in high-dimensional settings, the stochastic and closed-form versions of the flow matching loss yield nearly identical loss values. Then, using state-of-the-art flow matching models on standard image datasets, we show that both variants achieve comparable statistical performance, with the surprising finding that using the closed-form loss can even improve performance.
From this preprint.
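To make the comparison in the abstract above concrete, here is a minimal sketch of the two loss variants in standard flow matching notation, assuming a linear interpolation path; the symbols $v_\theta$ and $u_t$ are ours and not necessarily those of the preprint.

$$
\mathcal{L}_{\mathrm{CFM}}(\theta) \;=\; \mathbb{E}_{t,\,x_0,\,x_1}\!\left[\big\| v_\theta(x_t, t) - (x_1 - x_0) \big\|^2\right],
\qquad x_t = (1-t)\,x_0 + t\,x_1,
$$
$$
\mathcal{L}_{\mathrm{FM}}(\theta) \;=\; \mathbb{E}_{t,\,x_t}\!\left[\big\| v_\theta(x_t, t) - u_t(x_t) \big\|^2\right],
\qquad u_t(x) = \mathbb{E}\!\left[\, x_1 - x_0 \mid x_t = x \,\right].
$$

The two objectives differ only by a term independent of $\theta$ (the conditional variance of $x_1 - x_0$ given $x_t$), so they share the same minimizer; the stochastic version simply trains against a noisier per-sample target, which is precisely the property examined here as a candidate explanation for generalization.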
Sampling from an unknown distribution, accessible only through discrete samples, is a fundamental problem at the heart of generative AI. Current state-of-the-art methods follow a two-step process: first, estimating the score function (the gradient of a smoothed log-density), and then applying a diffusion sampling algorithm, such as Langevin dynamics or diffusion models. The accuracy of the resulting distribution is influenced by four major factors: generalization and optimization errors in score matching, and discretization and minimal noise amplitude in diffusion. In this paper, we make the sampling error explicit for a diffusion sampler in a Gaussian context, providing a precise analysis of the Wasserstein sampling error arising from these four error sources. This allows us to rigorously monitor how the anisotropy of the data distribution (encoded by its power spectrum) interacts with key parameters of the end-to-end sampling method, including the number of initial samples, the step sizes of score matching and diffusion, and the noise amplitude. In particular, we show that the Wasserstein sampling error can be expressed as a kernel-like norm of the data power spectrum, where the specific kernel depends on the method parameters. This result provides a basis for further analysis of the trade-offs involved in optimizing sampling accuracy.
From this preprint.
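Two standard identities help situate the Gaussian setting of the abstract above; they are generic facts about Gaussians, not statements taken from the preprint. For data $x \sim \mathcal{N}(\mu, \Sigma)$ smoothed with noise of amplitude $\sigma$, the score is linear, and the squared Wasserstein-2 distance between two Gaussian laws is explicit:

$$
\nabla \log p_\sigma(x) \;=\; -\,(\Sigma + \sigma^2 I)^{-1}(x - \mu),
\qquad p_\sigma = p * \mathcal{N}(0, \sigma^2 I),
$$
$$
W_2^2\big(\mathcal{N}(\mu_1,\Sigma_1),\, \mathcal{N}(\mu_2,\Sigma_2)\big)
\;=\; \|\mu_1 - \mu_2\|^2 \;+\; \operatorname{tr}\!\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big).
$$

When the true and estimated covariances are diagonal in the same basis, with eigenvalues (power spectrum) $\lambda_i$ and $\hat\lambda_i$, the trace term reduces to $\sum_i \big(\sqrt{\lambda_i} - \sqrt{\hat\lambda_i}\big)^2$, a spectrum-dependent quantity of the kind the abstract describes as a kernel-like norm of the data power spectrum.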