Offer linked to the Action/Network: – — –/– — –
Laboratory/Company: LS2N
Duration: 6 months
Contact: francois.queyroi@univ-nantes.fr
Publication deadline: 2025-02-14
Context:
Many studies have shown that learning models can lead to inequality of treatment and unfair decisions. A decision algorithm is often said to be "unfair" if its outcome depends (even indirectly) on some protected attribute (e.g. race, gender, etc.). In much of the literature, however, protected attributes are discrete, encoding the fact that an individual does or does not belong to one or more groups. A challenge in this context is to take into account the intersectionality of the possible discriminations faced by individuals.
Subject:
The aim of this project is to explore alternatives to the use of discrete variables to encode sensitive attributes. One possible approach is to use a graph (the sensitive network) to encode proximity/relationships between individuals. In this context, fairness could be defined as the lack of correlation between the existence of relationships and the decision/score. An intuitive example of an "unfair decision" is hiring only people who know the same people in the network.
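To make the idea concrete, here is a minimal sketch of what such a network fairness statistic could look like, written in Python with NetworkX and NumPy (libraries assumed here for illustration, not prescribed by the offer): decision agreement is compared along the edges of the sensitive network and across random pairs of individuals, and a large gap would suggest the outcome correlates with the relationships. The function name, the toy graph and the decision vector are purely illustrative.

import networkx as nx
import numpy as np

def edge_decision_correlation(G, decision):
    """Illustrative statistic: how much more often do two linked
    individuals receive the same decision than two random individuals?
    A large positive value hints that the decision depends on the
    sensitive network (e.g. hiring only people who know each other)."""
    nodes = list(G.nodes())
    # agreement along the edges of the sensitive network
    edge_agree = np.mean([decision[u] == decision[v] for u, v in G.edges()])
    # baseline agreement over random pairs of individuals (with replacement)
    rng = np.random.default_rng(0)
    pairs = rng.choice(nodes, size=(G.number_of_edges(), 2))
    base_agree = np.mean([decision[int(u)] == decision[int(v)] for u, v in pairs])
    return edge_agree - base_agree

# toy usage: a small "sensitive network" and an arbitrary binary decision
G = nx.karate_club_graph()
decision = {n: int(n < 17) for n in G.nodes()}
print(edge_decision_correlation(G, decision))

This is only one way to operationalise "lack of correlation"; part of the internship is precisely to reformulate existing group-fairness definitions in this setting.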
The objectives of this internship are to:
1. Review the state of the art on alternative notions of algorithmic fairness in the context of intersectionality.
2. Reformulate well-known definitions of group fairness in the context of simple sensitive networks.
3. Find potential case studies and datasets in order to start a benchmark.
4. Implement measures of network fairness and evaluate them on the datasets.
Candidate profile:
M2 student in mathematics/computer science (or equivalent) with an interest and skills in data analysis, graph mining, and fairness in machine learning. A background in the humanities (sociology, philosophy, etc.) is a big plus.
Education and required skills:
Work address:
Polytech Nantes, Rue Christian Pauc, 44300 Nantes
Attached document: 202411251412_Sujet_Stage_GraphFairness_2025.pdf