Quantifying Fairness and Discrimination in Predictive Models

The article Quantifying Fairness and Discrimination in Predictive Models has just been published in Machine Learning for Econometrics and Related Topics (Springer).

The analysis of discrimination has long interested economists and lawyers. In recent years, the computer science and machine learning literature has taken up the subject, offering an interesting re-reading of the topic. This interest follows numerous criticisms of algorithms used to translate text or to identify people in images. With the arrival of massive data and the use of increasingly opaque algorithms, discriminatory algorithms are not surprising: by enriching the data indefinitely, it has become easy to obtain a proxy for a sensitive variable. According to [69], “technology is neither good nor bad, nor is it neutral”, and therefore “machine learning won’t give you anything like gender neutrality ‘for free’ that you didn’t explicitly ask for”, as claimed by [61]. In this article, we review the general context of predictive models for classification. We present the main concepts of fairness, known as group fairness, based on the independence between the sensitive variable and the prediction, possibly conditional on other information. We then go further and present the concepts of individual fairness. Finally, we show how a potential discrimination can be corrected, in order to guarantee that a model is more ethical.
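
To make the group-fairness criteria mentioned above a little more concrete, here is a minimal sketch in Python (it is not taken from the article; the data and function names are purely illustrative). It computes the demographic-parity gap, which should be close to zero when the prediction is independent of the sensitive variable, and the equalized-odds gaps, obtained by conditioning the same comparison on the true outcome.

import numpy as np

def demographic_parity_gap(y_pred, s):
    # Difference in positive-prediction rates between the two groups
    # defined by the binary sensitive attribute s (independence criterion).
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return y_pred[s == 1].mean() - y_pred[s == 0].mean()

def equalized_odds_gaps(y_pred, y_true, s):
    # Same comparison, conditioned on the true outcome y_true
    # (separation criterion): one gap per observed value of y_true.
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    return {int(y): y_pred[(y_true == y) & (s == 1)].mean()
                    - y_pred[(y_true == y) & (s == 0)].mean()
            for y in np.unique(y_true)}

# Toy, purely hypothetical binary predictions
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])
s      = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(demographic_parity_gap(y_pred, s))        # 0.5: group s = 1 is favoured
print(equalized_odds_gaps(y_pred, y_true, s))   # {0: 0.5, 1: 0.5}

On this toy example, both criteria are violated; the fairness-correction methods discussed in the article aim at bringing such gaps close to zero while preserving predictive performance.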



