Tag Archives: Wasserstein

Fairness in Multi-Task Learning via Wasserstein Barycenters, at ECML PKDD 2023

Today, François presented our paper Fairness in Multi-Task Learning via Wasserstein Barycenters at ECML PKDD, in Torino. Slides are available online (and the poster can be found below).

The paper was published in the proceedings, Machine Learning and Knowledge Discovery in Databases: Research Track (pp. 295–312), available here.

Mitigating Discrimination in Insurance with Wasserstein Barycenters

Our new paper with François Hu and Philipp Ratz, Mitigating Discrimination in Insurance with Wasserstein Barycenters, is now available on arXiv.

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that the practice perpetuates discrimination based on sensitive features such as gender or race. Given that this discrimination can often be attributed to historical biases in the data, eliminating it, or at least mitigating it, is desirable. With the shift from more traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables from the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to mitigate the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we employ it on real data and discuss its implications.

(fictitious maps used in the article)
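To give a concrete, if simplified, picture of what "Wasserstein barycenters instead of simple scaling" means in one dimension, here is a minimal sketch (not the paper's code): each group's predicted premiums are mapped onto the group-size-weighted barycenter of the group-wise score distributions by averaging empirical quantile functions. The function and variable names, and the toy data, are assumptions made for this illustration only.

```python
import numpy as np

def barycenter_repair(scores, sensitive):
    """Map each group's scores onto the (group-size weighted) Wasserstein
    barycenter of the group-wise score distributions (quantile averaging)."""
    scores = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive)
    groups, counts = np.unique(sensitive, return_counts=True)
    weights = counts / counts.sum()              # p_s = P(S = s)
    repaired = np.empty_like(scores)
    for g in groups:
        mask = sensitive == g
        # empirical CDF level of each observation within its own group
        u = (np.argsort(np.argsort(scores[mask])) + 0.5) / mask.sum()
        # weighted average of the group-wise quantile functions at level u
        repaired[mask] = sum(w * np.quantile(scores[sensitive == g2], u)
                             for g2, w in zip(groups, weights))
    return repaired

# toy usage: two groups whose premium distributions differ
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
premium = rng.gamma(2.0, 100.0, size=1000) + 80.0 * s
fair_premium = barycenter_repair(premium, s)
print(premium[s == 0].mean(), premium[s == 1].mean())            # unequal
print(fair_premium[s == 0].mean(), fair_premium[s == 1].mean())  # ~equal
```

Mapping through the barycenter preserves the ranking of policyholders within each group while equalising the score distributions across groups, which is the sense in which it differs from a simple rescaling of premiums.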

Fairness in Multi-Task Learning via Wasserstein Barycenters

Our new paper, with François Hu and Philipp Ratz, Fairness in Multi-Task Learning via Wasserstein Barycenters, is now available.

Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed-form solution for the optimal fair multi-task predictor, covering both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.
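For context, in the single-task setting the predictor that is optimal under (strong) demographic parity is known in closed form as a quantile-averaging barycenter map, a result from the fair-regression literature that the paper extends to several tasks sharing a representation. A sketch of that univariate formula, with notation chosen here for illustration rather than taken from the paper:

```latex
% Known univariate closed form: push the group-conditional distributions of
% the unconstrained predictor f onto their Wasserstein barycenter.
% Notation: p_{s'} = P(S = s'), F_s is the CDF of f(X,S) given S = s,
% and F_s^{-1} its quantile function.
g^{\star}(x,s) \;=\; \sum_{s' \in \mathcal{S}} p_{s'} \, F_{s'}^{-1}\!\left( F_{s}\bigl(f(x,s)\bigr) \right)
```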

It will be presented in September, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023), in Torino.