9th ONLINE MEETUP with Prof. Ntoutsi, new AI professor at FI CODE, on Monday, 6 Feb 2023

06.02.2023 | 4:00 pm - 5:15 pm

More about the Machine Learning Interest Group at https://www.unibw.de/vis/mlig

We would like to invite you to our 9th ONLINE MEETUP, featuring three talks from the new professorship for AI and Machine Learning of Prof. Ntoutsi at FI CODE.

Since August 2022, Prof. Eirini Ntoutsi has been professor and head of the AIML group at UniBw M. Before that, she was a professor at FU Berlin and the University of Hannover. Her research interests are in fairness, adaptive learning, and generative AI.

 

About our ONLINE MEETUPs

- We will have three 15-minute talks on current ML topics
- Afterwards, there will be a "breakout room" for (almost open-ended) discussion of each presentation. This allows you to discuss a topic in a smaller group.
- MLIG depends on you! If you want to present an ML project, publication, algorithm, etc., don't hesitate to contact us

TALKS

- Prof. Eirini Ntoutsi (INF/CODE): An overview of fairness-aware Machine Learning
- Arjun Roy (FU Berlin): Fairness in multi-task learning
- Tai Le Quy (FU Berlin): Fairness-aware clustering models in collaborative learning

Details

Talk 1: An overview of fairness-aware Machine Learning (Prof. Dr. Eirini Ntoutsi)

AI-driven decision-making has penetrated almost all spheres of human life, from content recommendation and healthcare to predictive policing and autonomous driving, deeply affecting everyone, anywhere, anytime. The discriminatory impact of AI-driven decision-making on certain population groups has already been observed in a variety of cases, leading to ever-increasing public concern about the impact of AI on our lives. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
This talk provides an overview of the domain, including open issues and challenges.

* https://wires.onlinelibrary.wiley.com/doi/epdf/10.1002/widm.1356
* https://wires.onlinelibrary.wiley.com/doi/pdf/10.1002/widm.1452
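To make one of the central quantities of fairness-aware ML concrete, here is a minimal illustrative sketch (not taken from the talk or papers) of statistical parity difference, a common group-fairness measure comparing positive-prediction rates across groups:

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """Difference in positive-prediction rates between the
    unprotected (protected == 0) and protected (protected == 1)
    groups. A value of 0 indicates demographic parity."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unprotected = y_pred[protected == 0].mean()
    rate_protected = y_pred[protected == 1].mean()
    return rate_unprotected - rate_protected

# Toy example: four binary decisions, two individuals per group.
preds = [1, 1, 0, 1]   # model decisions (1 = favorable outcome)
groups = [0, 0, 1, 1]  # protected-attribute membership
spd = statistical_parity_difference(preds, groups)
print(spd)  # 1.0 - 0.5 = 0.5
```

A nonzero value signals that the favorable outcome is granted at different rates to the two groups; mitigation methods discussed in the overview aim to reduce such gaps while preserving accuracy.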

 

Talk 2: Fairness in multi-task learning (M.Sc. Arjun Roy)

The fairness implications of multi-task learning (MTL) have only recently surfaced in the literature that tackles fairness-accuracy trade-off for each task and the performance trade-off among different tasks. However, the fairness-accuracy balance and the inter-task performance balance may change over time. Instead of a rigid trade-off learning, we propose a noble flexible optimization approach and introduce the L2T-FMT algorithm that learns how to be fair in a MTL setting by selecting a objective (accuracy or fairness) to optimize at each step for each task as per need.
In literature we also see that in an MTL set-up sharing information equally among all the task may cause negative transfer of knowledge. Informed grouping of tasks to share knowledge at different level among different task has been found to overcome this challenge. However, all the existing MTL mechanism focuses only on accuracy/error of each task without any fairness goals. We first show that why a direct adaptation of such grouping strategy is not applicable to a fairness-aware MTL and then propose a noble fair-grouping method (L2G-FMT) which takes into account the multi-objective nature of the fair-accurate learning for each task.
Experimentally, we demonstrate the superiority of our proposed approaches over the state of the art baselines in producing fair-accurate multi-task learning.

* https://arxiv.org/pdf/2206.08403.pdf
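The core idea of per-step objective selection can be sketched with a toy heuristic. Note that this is only an illustrative stand-in: L2T-FMT *learns* the selection policy, whereas the rule below simply switches to the fairness objective whenever a task's fairness loss exceeds a (hypothetical) budget:

```python
def select_objective(acc_loss, fair_loss, fair_budget=0.1):
    """Toy per-task objective selection: optimize fairness while the
    fairness loss exceeds the budget, otherwise optimize accuracy.
    (fair_budget is an assumed illustrative hyperparameter.)"""
    return "fairness" if fair_loss > fair_budget else "accuracy"

# Two tasks with current (accuracy loss, fairness loss) values:
# task_a is unfair but reasonably accurate, task_b is the opposite.
tasks = {"task_a": (0.30, 0.25), "task_b": (0.40, 0.05)}
choices = {name: select_objective(a, f) for name, (a, f) in tasks.items()}
print(choices)  # {'task_a': 'fairness', 'task_b': 'accuracy'}
```

In a real MTL training loop, the chosen objective would determine which loss gradient is applied for that task at that step, so the balance can adapt over time rather than being fixed in advance.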

 

Talk 3: Fairness-aware clustering models in collaborative learning (M.Sc. Tai Le Quy)

Teamwork is a popular activity in collaborative learning in education and is essential for improving students' engagement in the classroom. In the traditional classroom, students are grouped into homogeneous and heterogeneous groups based on their knowledge levels to capture rich semantic information about the group. Clustering models have been applied to group students based on their information, including demographic attributes; however, ML-driven decision-making can be biased w.r.t. protected attributes such as gender or race. In this talk, we investigate the role and application of fairness-aware clustering models in collaborative activities in educational settings.

* https://educationaldatamining.org/EDM2021/virtual/static/pdf/EDM21_paper_184.pdf
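One common way the fair-clustering literature quantifies fairness is the *balance* of a clustering with respect to a binary protected attribute. The sketch below (an illustrative example, not code from the talk) computes it for toy study groups:

```python
from collections import Counter

def cluster_balance(labels, protected):
    """Balance of a clustering w.r.t. a binary protected attribute:
    for each cluster, take min(#group0/#group1, #group1/#group0);
    the overall balance is the minimum over all clusters
    (1.0 = every cluster perfectly balanced, 0.0 = some cluster
    contains only one group)."""
    per_cluster = []
    for c in set(labels):
        counts = Counter(p for l, p in zip(labels, protected) if l == c)
        n0, n1 = counts.get(0, 0), counts.get(1, 0)
        if n0 == 0 or n1 == 0:
            per_cluster.append(0.0)
        else:
            per_cluster.append(min(n0 / n1, n1 / n0))
    return min(per_cluster)

# Toy example: eight students split into two study groups of four.
group_labels = [0, 0, 0, 0, 1, 1, 1, 1]  # assigned study group
gender =       [0, 0, 1, 1, 0, 1, 1, 1]  # binary protected attribute
# Group 0 is perfectly balanced (2 vs 2); group 1 has 1 vs 3,
# so the overall balance is 1/3.
print(cluster_balance(group_labels, gender))
```

Fairness-aware clustering methods aim to produce groupings that stay useful for the learning activity while keeping such balance values high across all groups.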

 


VIRTUAL ACCESS

https://bbb.unibw.de/fhb-bnp-awc

Access code: 318969


Contact

Philipp J. Rösch

Info & registration


Organizer:
MLIG