VÜ: Responsible Artificial Intelligence
Discriminatory effects of AI-based decision making on certain populations have already been observed in a number of cases, leading to increasing public concern about the impact of AI on our lives. At the same time, AI models are becoming ever more complex, which makes it difficult to understand how decisions are made and whether the models are learning meaningful patterns from the data.
The field of responsible AI has recently emerged in an attempt to put humans at the center of AI-based systems by considering aspects such as fairness, explainability, reliability, and privacy. This course covers various aspects of responsible AI, with a focus on fairness-aware machine learning and explainable AI (XAI). By the end of the course, you will have learned how to incorporate responsibility aspects such as fairness and explainability into the design and application of AI systems.
Course content (subject to change):
- Responsible AI aspects
- Fairness-aware learning
- Explainable AI
- Responsibility aspects in AI/ML pipelines
Literature:
- Virginia Dignum, Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Springer, 2019.
- Solon Barocas, Moritz Hardt, Arvind Narayanan, Fairness and Machine Learning: Limitations and Opportunities, online, 2022.
- Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, online, 2022.