Research at the DataIA Institute

The DataIA Institute is the scientific and academic hub for artificial intelligence research in Paris-Saclay: it is both the driving force and the conductor, stimulating, connecting, and amplifying the research strengths of its ecosystem and partners. Its goal is to promote and accelerate the emergence of application-oriented projects by enabling the socio-economic world (associations, companies, institutions, etc.) to contribute to them.
The DataIA-Cluster research program focuses on AI-Core, i.e., the scientific foundations of AI: fundamental algorithms, machine learning and deep learning, probabilistic and statistical models, natural language processing (NLP), computer vision, and symbolic and logical methods. This work is applied to three interdisciplinary areas: mathematics, physics, and medicine:
- AI & Mathematics: Mathematics is the foundation of AI (statistics, probability, optimization, geometry, logic). Research in this area aims to invent new fundamental methods to make AI more reliable, robust, and explainable.
Example: creating new probabilistic models capable of handling uncertainty in data.
- AI & Physics: This involves using AI as a tool to explore complex physical systems, but also drawing inspiration from physics to invent new AI models.
Example: creating hybrid AI–physical-equation models ("physics-informed AI") capable of better predicting phenomena such as climate, fluid dynamics, and materials (see the sketch after this list).
- AI & Medicine: This is AI applied to the life and health sciences, backed by genuine fundamental research to ensure reliability, transparency, and clinical acceptability.
Example: developing AI models for image-based medical diagnosis (radiology, histology).
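The "physics-informed AI" example above can be made concrete with a minimal sketch. The code below is an illustrative assumption, not DataIA code: a small neural network (written in JAX) is trained to fit a few observations while also being penalised whenever it violates a known equation, here the toy ODE du/dt = -u; the network architecture, the toy data, and the single gradient step are chosen only for illustration.

```python
# Minimal sketch of a physics-informed loss (illustrative only, not DataIA's code).
# A small network fits observations AND is penalised for violating the known
# equation du/dt = -u, whose exact solution is u(t) = exp(-t).
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 32, 32, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def net(params, t):
    x = jnp.atleast_1d(t)
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze()

def loss(params, t_data, u_data, t_phys):
    # data term: fit the (sparse, possibly noisy) observations
    u_pred = jax.vmap(lambda t: net(params, t))(t_data)
    data_term = jnp.mean((u_pred - u_data) ** 2)
    # physics term: penalise the residual of the known equation du/dt + u = 0
    du_dt = jax.vmap(jax.grad(lambda t: net(params, t)))(t_phys)
    u = jax.vmap(lambda t: net(params, t))(t_phys)
    physics_term = jnp.mean((du_dt + u) ** 2)
    return data_term + physics_term

# one gradient step; a real training loop would iterate this many times
key = jax.random.PRNGKey(0)
params = init_params(key)
t_data = jnp.array([0.0, 0.5, 1.0])
u_data = jnp.exp(-t_data)                    # toy observations
t_phys = jnp.linspace(0.0, 2.0, 50)          # points where the equation is enforced
grads = jax.grad(loss)(params, t_data, u_data, t_phys)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```

The physics term keeps the model consistent with the equation even where observations are missing, which is the point of the hybrid data–theory models described above.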
This research addresses three scientific questions:
- How does AI learn? (learning paradigms)
Learning paradigms = how do machines really learn? Our work explores new ways of training AI, inventing more efficient and effective learning methods and moving towards AI that consumes less data and energy.
- How can we use what we already know? (known equations)
Known equations = how does AI work with the laws of science? Research projects aim to integrate equations from physics, biology, economics, and other disciplines directly into AI; in these fields, equations are the universal language for expressing knowledge. The core of this question is to create hybrid models that respect both the data and the established theories of those fields, so as to develop AI capable of more reliable and interpretable predictions.
- Can we trust the results? (reliability)
Reliability = how can we be sure that AI is trustworthy? It's about developing robust AI systems that can communicate how much the user may trust the results they produce. Developing reliable AI also means working on explainability, to understand the choices an algorithm makes and whether those choices have been skewed by bias, and designing tests and safeguards, much like quality controls, for AI systems (see the sketch after this list). This is essential in many fields, such as healthcare, justice, energy, and security, where an error can have serious consequences. In short, it's about building AI that we can trust.
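One standard way to make a system "communicate how much the user may trust the results", mentioned under reliability, is to report prediction intervals rather than bare point predictions. The sketch below is an illustrative assumption, not DataIA's method: it uses split conformal prediction on toy data, with a hypothetical `predict` function standing in for any trained model.

```python
# Minimal sketch of split conformal prediction (illustrative only).
# Residuals on a held-out calibration set are turned into prediction
# intervals with a target coverage level (~90% here).
import numpy as np

rng = np.random.default_rng(0)

# toy data: y = 2x + noise; `predict` stands in for any fitted model
x = rng.uniform(0, 10, size=400)
y = 2 * x + rng.normal(0, 1, size=400)
predict = lambda x: 2 * x

# calibration: measure the model's absolute errors on held-out points
x_cal, y_cal = x[:200], y[:200]
residuals = np.abs(y_cal - predict(x_cal))

# the (1 - alpha) quantile of the residuals gives the interval half-width
# (the "higher" method needs NumPy >= 1.22)
alpha = 0.1
n = len(residuals)
q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# interval for a new input: [prediction - q, prediction + q]
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"prediction: {predict(x_new):.2f}, 90% interval: [{lo:.2f}, {hi:.2f}]")
```

Under mild assumptions the interval contains the true value about 90% of the time, which gives the user an explicit, testable statement of how much to trust each prediction.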
If you have any questions, please contact Demian Wassermann (demian.wassermann@inria.fr), Scientific Director of the DataIA Institute and Director of Research at Inria (MIND team).