Le Séminaire Palaisien

« Le Séminaire Palaisien » | Francis Bach & Alexei Grinbaum

Venue: ENSAE - Room 1001

The Palaisien seminar brings together, on the first Tuesday of each month, the broad Saclay research community around statistics and machine learning.

Each seminar session consists of two 40-minute scientific presentations: a 30-minute talk followed by 10 minutes of questions.

Francis Bach and Alexei Grinbaum will lead the first session of the 2023 academic year!


Registration is free but required, subject to available seats. A buffet will be served after the seminar.

Learn more
Francis Bach | "Chain of Log-Concave Markov Chains"

Abstract: Markov chain Monte Carlo (MCMC) is a class of general-purpose algorithms for sampling from unnormalized densities. There are two well-known problems facing MCMC in high dimensions: (i) the distributions of interest are concentrated in pockets separated by large regions of small probability mass, and (ii) the log-concave pockets themselves are typically ill-conditioned. We introduce a framework to tackle these problems using isotropic Gaussian smoothing. We prove that one can always decompose sampling from a density (with minimal assumptions on the density) into a sequence of sampling steps from log-concave conditional densities via the accumulation of noisy measurements with equal noise levels. This construction keeps track of a history of samples, making it non-Markovian as a whole, but the history only appears in the form of an empirical mean, so the memory footprint is minimal. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. Joint work with Saeed Saremi and Ji Won Park (https://arxiv.org/abs/2305.19473).

Alexei Grinbaum | "Generative AI: from statistical physics to ethics"

Abstract: This talk intertwines a quick reminder of the most salient open problems in the transformer architecture with a questioning of the ethical and societal impacts of large language models. We'll discuss emergent behaviours in LLMs and their scientific understanding (or lack thereof) as critical phenomena. We'll dwell on the example of 'digital whales' to get a feeling for how much room is available down there in the LLM vector space for non-human uses of language. And we'll conclude by comparing machines that speak our language with non-human entities endowed with the same capacity in myth. What lessons can we draw for generative AI from angels, demons, gods, and oracles?