[SEMINAR] DATAIA / GS ISN
![[SEMINAR] DATAIA / GS ISN](/sites/default/files/2025-05/Template%20autres%20se%CC%81minaires.png)
From Probabilistic Testing to Certifiable AI: Large Language Models and Neuro-Symbolic Reasoning for Verifiable Autonomous Systems
Abstract
Learning-enabled Cyber-Physical Systems (LE-CPS), such as autonomous vehicles and drones, face critical safety and reliability challenges due to the stochastic nature of deep neural networks. In our FSE’22 and TSE’23 studies, we conducted an in-depth investigation into industry testing practices, uncovering significant gaps between current testing techniques and the needs of regulatory assurance. To address these challenges, we introduced two pioneering approaches: test reduction for ROS-based multi-module autonomous driving systems, and scenario-based construction for checking traffic-rule compliance in autonomous driving systems.
This talk will then present our recent progress in bridging probabilistic testing with formal reasoning. I will first cover our FSE’24 work on reviving model-based testing with Large Language Models (LLMs), followed by our TSE’24 and ICSE’25 efforts on LLM-driven scenario generation and online testing for uncrewed drone autolanding. Finally, I will introduce our FSE’25 paper NeuroStrata, which advances a neurosymbolic shift from black-box machine learning to white-box, human-understandable reasoning, aiming to improve the interpretability, testability, and certification of AI components in safety-critical CPS.
Biography
A/Prof. Xi Zheng earned his Ph.D. in Software Engineering from the University of Texas at Austin in 2015 and was awarded an Australian Research Council Future Fellowship in 2024. Between 2005 and 2012, he was the Chief Solution Architect for Menulog Australia. Currently, he holds several leadership roles at Macquarie University, Australia: Director of the Intelligent Systems Research Group (ITSEG.ORG), Director of International Engagement in the School of Computing, and Associate Professor and Deputy Program Leader in Software Engineering. His research areas include Cyber-Physical Systems Testing and Verification, Safety Analysis, Distributed Learning, the Internet of Things, and the broader spectrum of Software Engineering. A/Prof. Zheng has secured over $2.4 million in competitive funding from the Australian Research Council (one Future Fellowship, two Linkage projects, and one Discovery project) and Data61 (CRP) for projects focused on safety analysis, model testing and verification, and the development of trustworthy AI for autonomous vehicles. He has been recognized with several awards, including the Deakin Industry Researcher Award (2016) and the MQ Early Career Researcher Award (Runner-up, 2020). His academic contributions include numerous highly cited papers and best conference paper awards. He has served as a Program Committee member for leading software and systems conferences, such as ICSE (2026), FSE (2022, 2024), and PerCom (2017-2025), and as PC chair for IEEE CPSCom-2021 and IEEE Broadnets-2022.
Additionally, he serves as an associate editor for ACM Distributed Ledger Technologies and as an editor for the Springer Journal of Reliable Intelligent Environments. In 2023, A/Prof. Zheng was a visiting professor at both UCLA and UT Austin, and he is a co-founder of the international workshop on trustworthy autonomous cyber-physical systems. A/Prof. Zheng is a leading co-organizer of the Shonan Meeting (Seminar No.235) on “LLM-Guided Synthesis, Verification, and Testing of Learning-Enabled CPS” in March 2026 and the Dagstuhl Seminar (202501048) on “Advancing Testability and Verifiability of CPS with Neurosymbolic and Large Language Models” in October 2026.
Practical Information
The seminar is also available online via the following link: Zoom.
- Meeting ID: 983 1464 3541
- Passcode: 967004