| Code | 18004 |
| Year | 1 |
| Semester | S2 |
| ECTS Credits | 6 |
| Workload | PL(30H)/T(30H) |
| Scientific area | Informatics |

Entry requirements
N/A

Learning outcomes
The course introduces the fundamental concepts for developing and evaluating Artificial Intelligence (AI) systems that are transparent, reliable, fair, and robust. Students learn techniques for interpreting complex models and for mitigating biases. The course also covers causality and causal inference methodologies, which are essential for informed and ethical decision-making. At the end of the course, students should be able to:
a. Understand the principles of responsible and reliable AI (transparency, fairness, robustness, security);
b. Apply global and local interpretability techniques to machine learning models;
c. Identify and mitigate biases in data and models;
d. Assess the risks associated with adversarial attacks and with uncertainty in machine learning models;
e. Understand the fundamentals of causality and apply causal inference methods to observational data.

Syllabus
A. Transparency:
- Global interpretability techniques: Ceteris Paribus, ICE, PDP, ALE
- Local interpretability techniques: Shapley Values, SHAP, LIME
- Interpretability in CNNs: filter and activation visualization, activation maximization, Network Dissection
B. Bias and Fairness:
- Types of bias
- Fairness metrics and mitigation measures
C. Reliability, Robustness, and Security:
- Adversarial attacks
- Uncertainty estimation: Gaussian Process Regression
- Human supervision: HITL, HOTL, HIC, Human-out-of-the-Loop
D. Causality:
- Introduction to causality: causality vs. correlation
- Potential Outcomes Framework, Average Treatment Effect, RCTs
- Estimating causal effects in observational data: Propensity Score Matching, Instrumental Variables
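To give a concrete flavour of the global interpretability techniques listed in part A, the sketch below computes a one-dimensional partial dependence (PDP) curve by hand. The model function and the toy dataset are invented for illustration; in practice the model would be a fitted black box and the data the training set.

```python
# Minimal sketch of a Partial Dependence (PDP) computation.
# `model` stands in for a trained black-box predictor (illustrative only).

def model(x1, x2):
    # stand-in for a fitted model with two input features
    return 2.0 * x1 + x2 ** 2

# toy dataset of (x1, x2) observations (invented for illustration)
data = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]

def partial_dependence(grid, data):
    """For each grid value g of feature x1, average the model's
    predictions over the observed values of the other feature."""
    pdp = []
    for g in grid:
        preds = [model(g, x2) for _, x2 in data]
        pdp.append(sum(preds) / len(preds))
    return pdp

grid = [0.0, 1.0, 2.0]
# each PDP value is 2*g plus the mean of x2**2 over the data (14/3 here)
print(partial_dependence(grid, data))
```

Because the stand-in model is additive in x1, the resulting PDP curve is a straight line in g; for a genuinely nonlinear model the same averaging reveals the marginal effect of the feature.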

Main Bibliography
- Molnar, C. (2025). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (3rd ed.).
- Munn, M., & Pitman, D. (2022). Explainable AI for Practitioners: Designing and Implementing Explainable ML Solutions. Sebastopol, CA: O'Reilly Media.
- Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org.
- Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
- Verbeke, W., Baesens, B., De Smedt, J., De Weerdt, J., & Weytjens, H. (2025). AI for Business: From Data to Decisions (preliminary version).
- Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.

Teaching Methodologies and Assessment Criteria
Teaching methodologies:
- Theoretical classes;
- Practical laboratory classes;
- Individual projects;
- Tutoring to clarify doubts and support students in the development of their projects.

Assessment methods and criteria: the theoretical and practical components are assessed through two elements:
- a written test (T) assessing knowledge, worth 70% of the final grade;
- an individual practical assignment (TP), with a report on its execution and a presentation, worth 30% of the final grade.
Teaching-Learning Classification (CEA) = 0.7 T + 0.3 TP. Admission to the final exam requires CEA >= 6 points (UBI regulations).
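As a worked example of the grading formula above, here is a minimal sketch. The weights and the 6-point admission threshold come from the text; the function names, the sample grades, and the assumption of the usual 0-20 Portuguese grading scale are illustrative.

```python
# Sketch of the grade computation CEA = 0.7*T + 0.3*TP (0-20 scale assumed).

def cea(t: float, tp: float) -> float:
    """Teaching-Learning Classification: written test T weighs 70%,
    practical assignment TP weighs 30%."""
    return 0.7 * t + 0.3 * tp

def admitted_to_exam(t: float, tp: float) -> bool:
    # admission to the final exam requires CEA >= 6 points (UBI regulations)
    return cea(t, tp) >= 6.0

# illustrative grades: test 14, practical assignment 16
print(cea(14, 16))              # 0.7*14 + 0.3*16 ≈ 14.6
print(admitted_to_exam(14, 16))
```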

Language
Portuguese. Tutorial support is available in English.