Date: Tuesday 25 February 2025 – 15:00 (Europe/London)
Speaker: Simone Scardapane, Tenure-track assistant professor at Sapienza University of Rome
Abstract
Explainable AI is a set of tools and techniques for understanding and debugging neural network models. In this talk we will give an overview of some of the most common approaches, ranging from input attribution (e.g., saliency maps) to data attribution and the more recent ideas of mechanistic interpretability. We will list open challenges and issues (e.g., polysemanticity), especially in the context of scientific analysis and discovery in high-energy physics. We will close with ideas and trends for future research in the area.
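As a purely illustrative aside on the input-attribution techniques mentioned in the abstract, the sketch below computes a vanilla gradient saliency map for a toy PyTorch classifier: the gradient of the predicted-class logit with respect to the input indicates which pixels most influence the prediction. The model and input here are placeholders, not anything specific to the talk.

```python
# Minimal sketch of gradient-based input attribution (a saliency map).
import torch
import torch.nn as nn

# A small toy classifier; any differentiable model would do.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Stand-in for a 28x28 grayscale input image.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(x)
target_class = logits.argmax(dim=1).item()

# Backpropagate the target logit to the input; the per-pixel absolute
# gradient magnitude is the saliency map.
logits[0, target_class].backward()
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```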
![](/event/735/attachments/4109/5808/Simone.jpg)
Biography
Simone Scardapane is a tenure-track assistant professor at Sapienza University of Rome. His current research focuses on the explainability and modularity of neural networks, and their application to scientific domains such as high-energy physics, archaeology, and medicine. He has published more than 130 papers in the field of deep learning in international journals and conferences (including NeurIPS, ICML, ICLR, and AAAI). Among other roles, he serves as area chair for NeurIPS and ICLR, and as action editor for Transactions on Machine Learning Research, IEEE Transactions on Neural Networks and Learning Systems, and Neural Networks. He is a junior fellow of the Sapienza School of Advanced Studies, an affiliate researcher at the Italian Institute of Nuclear Physics, and a member of the ELLIS society. He is also actively involved in disseminating AI to the public through talks and posts. In the past, he served as co-founder and chair of a non-profit association (the Italian Association for Machine Learning), and he has also hosted meetups and podcasts.