Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and outputs created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact, and its potential biases.
In this brief but intense school, we will start by learning how to design graph neural networks (GNNs) and then move to explainability methods, from saliency maps to data attributions. Concrete examples will be discussed, spanning fields from fundamental particle physics to medical applications and neuroscience.
The School will feature lectures and presentations from physicists and computer scientists. Most importantly, participants will be invited to briefly present a project they are working on to which AI methods are or could be applied and, through an active learning approach, they will be able to discuss with experts the XAI methods most suitable for their science case.
Invited keynote speaker: Professor Pietro Lio (Professor of Computational Biology in the AI division, member of the Cambridge Centre for AI in Medicine and of the European Laboratory for Learning and Intelligent Systems). He will present applications of AI in medicine, focusing on how to build a digital patient twin using graph and hypergraph representation learning and considering physiological (cardiovascular), clinical (inflammation), and molecular (multi-omics and genetics) variables.
The School is supported by the Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability (MUCCA) project, funded by CHIST-ERA. The School is targeted at PhD students and early-career researchers who already have some knowledge of ML techniques. Support for travel and/or accommodation can be provided on request (a selection process might be needed).