Explainable Artificial Intelligence


Prerequisites: basic notions of machine learning


Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. Explainable AI (XAI) addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results. We motivate the need for XAI in real-world and large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges. An XAI platform collecting many of the recently proposed algorithms will be presented on specific use cases, and students will be able to familiarize themselves with some of the methods.

The course is organized in two modules: i) an introductory one providing the motivations, main concepts and main methods; ii) an advanced one in which students will actively work on monographic topics, with readings interleaved with talks by international scholars working in the field. The schedule will also depend on the availability of the invited international speakers.

Module 1 (8 hours):

1)      Crash course on XAI (4 hours).

  1.      Motivation for XAI
  2.      What is an explanation
  3.      The taxonomy of XAI methods for machine learning
  4.      Overview of post-hoc explanation methods
  5.      Overview of transparent by-design methods
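As a concrete illustration of the post-hoc idea listed above, the sketch below fits a local linear surrogate around a single instance of a black-box classifier, in the style of LIME. Everything here (the `black_box` function, the sampling scale, the instance) is invented for illustration; this is not the course's library, only a minimal sketch of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: any opaque scoring function would do here.
def black_box(X):
    return (X[:, 0] ** 2 + 3 * X[:, 1] > 2).astype(float)

def local_surrogate(instance, n_samples=500, scale=0.3):
    """LIME-style sketch: perturb around the instance, query the black box,
    and fit a proximity-weighted linear model; its coefficients serve as a
    local, post-hoc explanation of the black box's behaviour."""
    X = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
    y = black_box(X)
    # Weight perturbed samples by their closeness to the explained instance.
    w = np.exp(-np.sum((X - instance) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])   # add an intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[:-1, 0]                           # per-feature local importance

coefs = local_surrogate(np.array([1.0, 0.5]))
```

The surrogate is faithful only near the chosen instance; its coefficients say how each feature locally pushes the prediction, which is exactly the trade-off post-hoc methods make against transparent by-design models.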

2)      Hands-on with XAI methods (4 hours). Students will be introduced to the Python library of XAI methods provided by the ERC XAI project.

Module 2 (12 hours):

3) Advanced Topics

  1.      The role of explainability in the novel ML process: the assessment guidelines for trustworthy AI (possible invited speaker Virginia Dignum - Umea Univ.)
  2.      Contrastive reasoning: counterfactuals and causality (students' seminars)
  3.      Explaining by design - with prototypes (possible invited speaker Cynthia Rudin - Duke Univ. and/or students' seminars)
  4.      Explaining by design - with argumentation and knowledge graphs (possible invited speaker Francesca Toni and/or students' seminars)
  5.      Explainable AI - post-hoc and other challenges: students' seminars on papers selected from a proposed set
  6.      Explaining by design - on the integration of symbolic and sub-symbolic techniques (possible invited speaker Omicini and/or students' seminars)
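To give a flavour of the counterfactual topic above: a counterfactual explanation answers "what is the smallest change to this input that flips the decision?". The sketch below runs a greedy search against a toy linear scorer; both the scorer and the search procedure are invented for illustration and stand in for the methods surveyed in the readings.

```python
import numpy as np

# Toy linear scorer standing in for an opaque decision model (illustrative only).
w = np.array([1.5, -2.0, 0.5])
b = -0.2

def score(x):
    return float(w @ x + b)

def counterfactual(x, target=0.0, step=0.05, max_iter=1000):
    """Greedy counterfactual search: repeatedly nudge the single most
    influential feature in the direction that raises the score, until the
    decision flips (score crosses the target threshold)."""
    x = x.copy()
    for _ in range(max_iter):
        if score(x) >= target:
            return x
        i = int(np.argmax(np.abs(w)))     # most influential feature
        x[i] += step * np.sign(w[i])      # move it in the helpful direction
    return x

x0 = np.array([0.1, 0.4, 0.0])            # rejected instance: score(x0) < 0
cf = counterfactual(x0)                   # accepted: score(cf) >= 0
```

Because the search only ever touches the most influential feature, the resulting counterfactual changes as little of the input as possible, which is the sparsity property counterfactual methods typically aim for.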

Educational aims

O1: This course provides a reasoned introduction to the work on Explainable AI (XAI) to date, and surveys the literature with a focus on machine learning and symbolic AI approaches.

O2: To familiarize students with many of the recently proposed methods and the related algorithms on specific use cases.

Bibliographical references

1)      T. Miller (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267:1–38.

2)      R. Moraffah, M. Karami, R. Guo, A. Raglin, & H. Liu (2020). Causal interpretability for machine learning - problems, methods and evaluation. SIGKDD Explorations, 22(1):18–33. www.kdd.org/exploration/Causal_Explainability.pdf

3)      S. Verma, J. P. Dickerson, & K. Hines (2020). Counterfactual Explanations for Machine Learning: A Review. CoRR abs/2010.10596.

4)      R. Calegari, G. Ciatto, & A. Omicini (2020). On the integration of symbolic and sub-symbolic techniques for XAI: A survey. Intelligenza Artificiale, 14(1):7–32.

5)      R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, & D. Pedreschi (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5):93.