DATAIA Seminars

Le Séminaire Palaisien | Machine Learning and Statistics

Event location
ENSAE - Amphi 200


Le Séminaire Palaisien gathers the broad Saclay research community around statistics and machine learning on the first Tuesday of every month.

Each seminar session is divided into two scientific presentations of 40 minutes each: 30 minutes of talk and 10 minutes of questions, followed by a coffee break.

Pierre Laforgue (Télécom Paris) and Sylvain Arlot (Université Paris-Sud, Inria) will lead the session of December 2nd.

« On the Dualization of Operator-Valued Kernel Machines » - Pierre Laforgue

Operator-Valued Kernels (OVKs) provide an elegant way to extend scalar kernel methods when the output space is a Hilbert space. If the output space is finite-dimensional, this framework naturally makes it possible to tackle multi-class classification or multi-task regression problems. But its ability to deal with infinite-dimensional output spaces opens the door to many more applications, such as structured output prediction, structured representation learning, or functional regression. This work investigates how to use the duality principle to handle families of loss functions as yet unexplored within OVK machines. The difficulty raised by infinite-dimensional dual variables is overcome by means of a Double Representer Theorem, which will be made explicit. This makes it possible, for instance, to handle ε-insensitive and Huber losses, which are of particular interest in the context of surrogate approaches.
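As a deliberately simple illustration of the finite-dimensional case, here is a minimal sketch of multi-task regression with a separable operator-valued kernel K(x, x') = k(x, x')·A, where the PSD matrix A encodes output correlations. All names and parameter choices below are illustrative assumptions, not taken from the talk; note also that the squared loss used here admits a closed form, whereas the talk's contribution concerns non-smooth losses (ε-insensitive, Huber) handled via dualization.

```python
# Sketch of multi-output kernel ridge regression with a separable
# (decomposable) OVK: K(x, x') = k(x, x') * A.  Illustrative only.
import numpy as np

def rbf_gram(X, Z, gamma=1.0):
    """Scalar RBF Gram matrix k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_ovk_ridge(X, Y, A, lam=0.1, gamma=1.0):
    """Minimize sum_i ||y_i - f(x_i)||^2 + lam * ||f||^2 for the separable
    OVK.  The representer coefficients C satisfy G C A + lam C = Y, solved
    in closed form via the eigendecompositions of G and A."""
    G = rbf_gram(X, X, gamma)
    g, U = np.linalg.eigh(G)          # G = U diag(g) U^T
    a, V = np.linalg.eigh(A)          # A = V diag(a) V^T
    Yt = U.T @ Y @ V
    Ct = Yt / (np.outer(g, a) + lam)  # elementwise division per eigenpair
    return U @ Ct @ V.T

def predict(Xnew, X, C, A, gamma=1.0):
    """f(x) = sum_j k(x, x_j) A c_j, i.e. rows of k(Xnew, X) C A."""
    return rbf_gram(Xnew, X, gamma) @ C @ A

# toy usage: two strongly correlated outputs
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
Y = np.hstack([np.sin(3 * X), np.sin(3 * X) + 0.1 * X])
A = np.array([[1.0, 0.9], [0.9, 1.0]])   # assumed output-correlation matrix
C = fit_ovk_ridge(X, Y, A, lam=1e-3)
print(np.abs(predict(X, X, C, A) - Y).max())  # training residual (small)
```

The closed form exploits the Kronecker structure of the separable kernel: in the joint eigenbases of G and A the linear system decouples into scalar equations, one per eigenpair.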

This is a joint work with Alex Lambert, Luc Brogat-Motte and Florence d'Alché-Buc from Télécom Paris. The preprint is available at

« Analysis of some Purely Random Forests » - Sylvain Arlot

Random forests (Breiman, 2001) are a very effective and commonly used statistical method, but their full theoretical analysis is still an open problem. As a first step, simplified models such as purely random forests have been introduced, in order to shed light on the good performance of Breiman's random forests.

In the regression framework, the quadratic risk of a purely random forest can be written as the sum of two terms, which can be understood as an approximation error and an estimation error. Robin Genuer (2010) studied how the estimation error decreases when the number of trees increases for some specific model. In this talk, we study the approximation error (the bias) of some purely random forest models in a regression framework, focusing in particular on the influence of the size of each tree and of the number of trees in the forest.
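In standard notation (assumed here, not necessarily the speaker's), this decomposition of the pointwise quadratic risk reads:

```latex
\mathbb{E}\!\left[ \bigl( \hat f(x) - f^\star(x) \bigr)^2 \right]
  = \underbrace{\bigl( \mathbb{E}[\hat f(x)] - f^\star(x) \bigr)^2}_{\text{approximation error (squared bias)}}
  \;+\; \underbrace{\operatorname{Var}\bigl( \hat f(x) \bigr)}_{\text{estimation error (variance)}},
```

where f⋆ is the regression function, f̂ the purely random forest estimator, and the expectation is taken over both the sample and the randomness of the trees.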

Under some regularity assumptions on the regression function, we show that the bias of an infinite forest decreases at a faster rate (with respect to the size of each tree) than that of a single tree. As a consequence, infinite forests attain a strictly better risk rate (with respect to the sample size) than single trees.

This talk is based on joint works with Robin Genuer.

Practical Information

The seminar will take place on December 2 from 4pm to 5:30pm in ENSAE Amphitheatre 200.

It will be followed by a coffee break.

Registration is free but mandatory, within the limit of available seats.
For security reasons, unregistered participants will not be admitted to the conference room.