4213 Privacy Preserving Machine Learning
HT (starting 2025), MSc Cyber-Security
6 ECTS, Lecture + Seminar
Link to the Course in ILIAS - TBA
Current advances in AI and machine learning allow us to automate tasks and improve the quality of the results we obtain. However, processing personal data without appropriate safeguards can lead to the undesirable disclosure of sensitive information: machine learning models often retain precise information about the individual data points used to train them. We must therefore protect the privacy of individuals while maximising the utility of the machine learning model.
This course starts by briefly illustrating the drawbacks of classic data anonymisation schemes and showing how machine learning models can leak sensitive information about individual data points. This sets the stage for differential privacy, a mathematical definition of privacy that provides an algorithmic framework for designing practical privacy-preserving algorithms for data analytics and machine learning. We will introduce the formal definition of differential privacy, analyse its key properties, and then focus on practical implementations. To that end, we will introduce the main building blocks for constructing private machine learning algorithms. Finally, we will present applications to private federated learning, where several data owners collaborate to train a model without sharing their data.
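As a first orientation, the definition at the heart of the course is usually stated as follows: a randomised mechanism \(\mathcal{M}\) is \((\varepsilon, \delta)\)-differentially private if, for all neighbouring datasets \(D, D'\) differing in a single record and all sets of outputs \(S\),

\[ \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta. \]

The lecture will develop this definition and its key properties in full.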
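As a small taste of these building blocks, the sketch below implements the classic Laplace mechanism for a counting query; the function name, toy data, and parameter values are illustrative and not part of the course material:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon, the standard calibration
    for a query whose output changes by at most `sensitivity` when one
    record is added or removed."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release how many records match a predicate.
# A counting query has sensitivity 1 (one record changes the count by 1).
data = np.random.randint(0, 100, size=1000)
true_count = int(np.sum(data > 50))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.1f}")
```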
In the seminar, students will implement and evaluate current differential privacy and federated learning architectures. They will also analyse, evaluate, and discuss current challenges in privacy-preserving machine learning with the aim of finding new solutions, drawing on the specialist literature and current publications.
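To give a flavour of the seminar work, a minimal federated averaging (FedAvg) round might look as follows; the linear model, client data, and hyperparameters are illustrative assumptions, not the seminar's actual codebase:

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 10) -> np.ndarray:
    """One client's local training: a few gradient steps on a linear
    least-squares model, run on data that never leaves the client."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w: np.ndarray, clients: list) -> np.ndarray:
    """Server step: average the clients' locally updated weights,
    weighted by local dataset size. Only weights are exchanged, not data."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Toy setup: three clients whose private data come from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("recovered weights:", w)  # should approach [1.0, -2.0]
```

In the seminar, sketches like this serve as a starting point for adding differentially private noise to the exchanged updates and for evaluating the resulting privacy-utility trade-off.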