Tag Archives: fairness

Algorithmic fairness with optimal transport: quantifying counterfactual fairness and mitigating group fairness

This Friday, I will be at Laval University, in Québec City, to give a talk at the Statlab annual day.

In this talk, we present two complementary approaches to addressing fairness in algorithmic decision-making, regarding group and individual fairness. First, we use Wasserstein barycenters to obtain strong Demographic Parity with one or multiple sensitive features. Our method provides a closed-form solution for the optimal, sequentially fair predictor, enabling possible interpretations of correlations between sensitive attributes. Then, we introduce a novel method that links two existing counterfactual approaches: causal-graph-based adaptations (Plečko and Meinshausen, 2020) and optimal transport (De Lara et al., 2024). By extending “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, we propose a new framework, termed sequential transport, which we apply to the problem of individual fairness. Theoretical foundations are established, followed by numerical demonstrations on synthetic and real datasets.

Slides are available online.

Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness

Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness”, written with Agathe Fernandes Machado and Ewen Gallic, is now online.

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

SCOR Foundation – Scope and limits of Artificial intelligence

On May 15, 2024, the SCOR Foundation for Science hosted a webinar titled “Scope and limits of Artificial intelligence”, delivered by Arthur Charpentier. A professor in the Department of Mathematics at the University of Quebec in Montreal and a member of the Institute of Actuaries, Arthur Charpentier is an internationally recognized expert in actuarial science and the author of numerous academic articles published in the best actuarial academic journals worldwide.

During the webinar, Arthur Charpentier discussed the research project “Fairness of predictive models: an application to insurance markets”, which is supported by the SCOR Foundation for Science. This project addresses biases within the automatic artificial intelligence algorithms utilized to determine optimal pricing in individual policies. Its aim is to mitigate or eliminate such biases, which could lead to inequities or discriminatory practices based on factors such as gender, race, religion, or origin in the coverage provided by insurers or reinsurers to policyholders.

Quantifying Fairness and Discrimination in Predictive Models

The article Quantifying Fairness and Discrimination in Predictive Models was just published in Machine Learning for Econometrics and Related Topics, Springer.

The analysis of discrimination has long interested economists and lawyers. In recent years, the literature in computer science and machine learning has become interested in the subject, offering an interesting re-reading of the topic. These questions are the consequences of numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data, and the use of increasingly opaque algorithms, it is not surprising to have discriminatory algorithms, because it has become easy to find a proxy of a sensitive variable, by enriching the data indefinitely. According to [69], “technology is neither good nor bad, nor is it neutral”, and therefore, “machine learning won’t give you anything like gender neutrality ‘for free’ that you didn’t explicitly ask for”, as claimed by [61]. In this article, we will come back to the general context, for predictive models in classification. We will present the main concepts of fairness, called group fairness, based on independence between the sensitive variable and the prediction, possibly conditioned on additional information. We will then go further, presenting the concepts of individual fairness. Finally, we will see how to correct a potential discrimination, in order to guarantee that a model is more ethical.

Fresh from the oven…

14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters and 67 litres of beer were needed to complete this adventure…

(Goscinny and Uderzo (1965*), Astérix et Cléopâtre)

Almost better than hot, freshly baked bagels…

the textbook Insurance, Biases, Discrimination and Fairness is now out, and just arrived today! Even though I have spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it is still an immense feeling of pride to open your own book for the very first time.

Astérix et Cléopâtre is the last Astérix of the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chaire de modélisation statistique du risque, and a living memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée): “When the Collection Pilote switched to editions with only the Astérix titles in the menhir, I think the sentence disappeared”… It was the version my grandparents had, the one I would (re)devour every year when I was a kid.

“Scope and limits of artificial intelligence” at the SCOR foundation monthly webinar

This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the foundation (newsletter #1 is online).

Insurance (and further motivations)

Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS.

Interestingly, we have the data to analyse that one. In the original analysis, conditional on non-re-offending, the proportions of individuals wrongly classified in the two protected groups are significantly different, so the algorithm is racist.

The answer was that actually, conditional on being classified as high risk, the probabilities of re-offense in the two protected groups are not significantly different, so the algorithm is not racist.

So clearly, we can start to see that it will not be so easy: using the same data and the same models, two different conclusions can be obtained.

We will also discuss legal aspects.

This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.

I can also mention some recent projects in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).

And finally, I should stress that discrimination has little to do with the intention of the statistician. This is the idea of indirect discrimination.

I should also mention “redlining”. About 100 years ago, in the US, we started to see maps, created by HOLC (based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.

As a side note, we nowadays see some “blue-lining” related to climate risks.

“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.

Finally, I just want to recall that algorithms just tend to reproduce what can be observed in data. If there is a difference between men and women, they will reproduce it.

A bit more on insurance

I should also stress an important problem (that could be related to a paper we wrote, in French, a few years ago). Classically, when modeling categorical variables, such as a binary variable y\in\{0,1\}, practitioners just care about getting the right category. On the left, we have pictures of cats and dogs to train a model, then we try it on a new picture, that is either a cat or a dog. Somehow, there is a ground truth and it is possible to see whether we are right or wrong. Same if we want to detect a disease on medical pictures. Now, if we move to the right: in the middle, we have a model that predicts whether it will rain or not. But here, maybe, what we care about is actually the probability of rain. On the right, we have the actuarial problem of modeling claims frequencies. We do not want to predict who will claim a loss, but we want a good estimator of the probability of claiming a loss. The challenge, clearly, is that we cannot observe that probability. We cannot observe the latent risk factor; we only observe whether people had an accident or not. But some people with a very small probability can still claim a loss, and very bad drivers can actually be very lucky, and have no accident in a given year.

Again, in insurance, we care more about the score, the estimation of the probability, than about the class \widehat{y}. So we can slightly modify standard fairness definitions, to be based not on predicted classes \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.
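In a nutshell, with the score m(\boldsymbol{X},S), the outcome Y, the sensitive attribute S, and \perp denoting (conditional) independence, these three criteria can be written as
independence (demographic parity): m(\boldsymbol{X},S)\perp S
separation (equalized odds): m(\boldsymbol{X},S)\perp S\mid Y
sufficiency (calibration): Y\perp S\mid m(\boldsymbol{X},S)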

Quantifying unfairness with optimal transport

Let us start with demographic parity. A weak version is that, on average, scores in the two groups should be identical (or close). An alternative is the strong version, asking for equality in distributions: for any set \mathcal{I}\subset[0,1], the probability that the score is in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.

Mathematically, we need a distance between the distributions of scores in the two groups. A popular choice is the Wasserstein distance, which is related to optimal transport.

The empirical version is perhaps easier to understand, as the mapping is based on a matching of individuals.
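For instance, here is a minimal R sketch on simulated scores (the vectors m0 and m1 below are purely hypothetical), where the one-dimensional matching simply pairs quantiles of the two distributions:

# simulated scores (predicted probabilities) in the two protected groups
set.seed(123)
m0 <- rbeta(1000, 2, 5)   # scores in group A
m1 <- rbeta(1000, 3, 4)   # scores in group B
# in dimension one, optimal transport matches quantiles with quantiles, so the
# W2 distance is simply a distance between the two empirical quantile functions
u  <- seq(.01, .99, by = .01)
W2 <- sqrt(mean((quantile(m0, u) - quantile(m1, u))^2))
W2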

As a cultural side note, a couple of slides explain why this has to do with “optimal transport”, going back to Monge’s (1781) problem. It is all about transporting the sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…

and then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the pile, etc.

Mitigation

For mitigation (once we have observed that there was discrimination, as discussed previously), heuristically, we want to be somewhere in between the two distributions of the two subgroups.

Being “in between” can be interpreted locally: for someone in group A, the fair prediction should be a weighted average (the weights being related to the proportions of the two groups) of the prediction obtained as someone in group A and some sort of counterfactual in the other group, namely the prediction that person would have obtained had she been in group B, at the same probability level.

For the other group, it is the opposite.
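As a small illustration, here is a minimal R sketch of that adjustment, on simulated scores as above (m0, m1 and the resulting weights are hypothetical): each score is averaged with its counterfactual in the other group, taken at the same quantile level.

# simulated scores in the two groups, as in the previous sketch
set.seed(123)
m0 <- rbeta(1000, 2, 5); m1 <- rbeta(1000, 3, 4)
# empirical c.d.f. and quantile functions of the scores in each group
F0 <- ecdf(m0); F1 <- ecdf(m1)
Q0 <- function(u) as.numeric(quantile(m0, u))
Q1 <- function(u) as.numeric(quantile(m1, u))
# weights given by the proportions of the two groups
p0 <- length(m0) / (length(m0) + length(m1)); p1 <- 1 - p0
# fair score for someone in group A with score m: weighted average of m and of
# its counterfactual in group B, at the same probability level F0(m)
fair_A <- function(m) p0 * m + p1 * Q1(F0(m))
fair_B <- function(m) p1 * m + p0 * Q0(F1(m))
fair_A(0.3); fair_B(0.3)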

Beyond demographic parity

If we get back to our COMPAS examples, demographic parity, in the standard classification-based definition, would be translated as
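with \widehat{y} the predicted class and A and B the two protected groups,

\mathbb{P}[\widehat{y}=1\mid S=A]=\mathbb{P}[\widehat{y}=1\mid S=B]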

If we get back to the original motivation we gave, it had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.

More generally, if we consider a weak version of these independence criteria, we have equality of moments within each protected subgroup,
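namely, keeping the notation above (the weak counterparts of the three criteria),

\mathbb{E}[m(\boldsymbol{X},S)\mid S=A]=\mathbb{E}[m(\boldsymbol{X},S)\mid S=B]
\mathbb{E}[m(\boldsymbol{X},S)\mid Y=y,S=A]=\mathbb{E}[m(\boldsymbol{X},S)\mid Y=y,S=B],\ y\in\{0,1\}
\mathbb{E}[Y\mid m(\boldsymbol{X},S)=p,S=A]=\mathbb{E}[Y\mid m(\boldsymbol{X},S)=p,S=B],\ p\in[0,1]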

Let us say a bit more about calibration. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.

Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
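A minimal R sketch of that check, on simulated data (the scores p_hat and the binning below are hypothetical): predicted probabilities are grouped into bins, and within each bin the average prediction is compared with the observed frequency of 1’s.

set.seed(42)
n     <- 1e4
p_hat <- runif(n)               # scores returned by some model
y     <- rbinom(n, 1, p_hat)    # outcomes, generated here from well-calibrated scores
bins  <- cut(p_hat, breaks = seq(0, 1, by = .1), include.lowest = TRUE)
# for a well-calibrated model, the two columns below should be close, bin by bin
cbind(predicted = tapply(p_hat, bins, mean),
      observed  = tapply(y, bins, mean))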

To conclude that digression, I can mention the following example, highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures, generated by some algorithm, and more precisely, a flow of pictures, from a woman to a man.

Below, we can see the probabilities given by some online application that returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?

Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used, to quantify counterfactual unfairness. But I won’t be too long here.

Finally, an opening for next year’s agenda, with interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular

let the data speak for itself

In insurance, interpretation is very important, probably more important than model assumptions.

Interpretation becomes a key concept when dealing with multiple sensitive attributes.

To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem….

Tomorrow, we will discuss this further at our workshop, in Québec City.

Workshop on fairness and discrimination in insurance (registration is open)

Almost two years ago, on May 13th 2022, we organized a Workshop on fairness and discrimination in insurance, JEDA’22, at Laval University (in Québec city), with Marie-Pier Côté.

It was a beautiful success, with many people attending in person, for one of the first events after the pandemic. The second workshop (JEDA’24) will be organized in less than a month, on May 16th.

Registrations are open! We will have in the room Fei Huang (UNSW Sydney), David Schraub (Chicago Actuarial Association), Marie-Ève Lainez (Autorité des marchés financiers), Laurence Barry (Chaire PARI), Agathe Fernandes Machado (UQÀM), Mallika Bender (Casualty Actuarial Society), Christopher Cooney (TD Insurance) and Olivier Côté (Université Laval).

Online Seminar Finance & Modeling, Centre d’Économie de la Sorbonne

In a week, I will give a talk at the Modélisation Financière seminar (“Online Seminar Finance & Modeling” according to the invitation) on Using optimal transport to mitigate unfair predictions. Slides are now online.

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications. The talk will be based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).

Fairness and discrimination, PhD Course, #8 Individual fairness

After our post on “group fairness”, it’s time to discuss so-called “individual fairness”.

Similarity

The first idea is discussed in Dwork et al. (2012):

our approach is centered around the notion of a task-specific similarity metric describing the extent to which pairs of individuals should be regarded as similar for the classification task at hand. The similarity metric expresses ground truth. When ground truth is unavailable, the metric may reflect the “best” available approximation as agreed upon by society. Following established tradition – Rawls (1971) – the metric is assumed to be public and open to discussion and continual refinement. Indeed, we envision that, typically, the distance metric would be externally imposed, for example, by a regulatory body or externally proposed by a civil rights organization

or

Counterfactual fairness

The second one is related to causal inference. Ensuring fairness using causal methods will produce “counterfactual fairness” (to use the term introduced in Kusner et al. (2017)), based on the idea that a decision is fair towards an individual if the outcome is the same in reality as it would be in a ‘counterfactual’ world, in which the individual belongs to the other group (with respect to the sensitive attribute).
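Formally, in the notation of Kusner et al. (2017), with U the latent background variables of the causal model, a predictor \widehat{Y} is counterfactually fair if, for all \boldsymbol{x}, all groups a and b, and all y,

\mathbb{P}[\widehat{Y}_{S\leftarrow a}(U)=y\mid \boldsymbol{X}=\boldsymbol{x},S=a]=\mathbb{P}[\widehat{Y}_{S\leftarrow b}(U)=y\mid \boldsymbol{X}=\boldsymbol{x},S=a]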

Quite naturally, we should compare potential outcomes, either globally (average treatment effect) or a local version, conditional on the characteristics \boldsymbol{x} of an individual.
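In the usual potential-outcomes notation (an assumption here, since it is not spelled out above), with Y(0) and Y(1) the two potential outcomes, the global quantity is

\text{ATE}=\mathbb{E}[Y(1)-Y(0)]

and its local counterpart is \mathbb{E}[Y(1)-Y(0)\mid \boldsymbol{X}=\boldsymbol{x}].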

Based on causal graphs (discussed previously) we can define several notions of individual fairness.

Hence, it is possible to use the approach of Plečko et al. (2021), based on transport and quantile regressions.

To illustrate, we can consider some causal graph on our toy dataset

and then, on some specific individuals in the dataset

Here, we can also get a counterfactual version of all individuals with one-to-one matching, and optimal transport

i.e.

and we can get a counterfactual version, and possibly a different prediction, using the fairadapt R package.
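A minimal sketch of what such a call could look like, on a toy dataset; the interface assumed below (a fairadapt() call taking a formula, the protected attribute prot.attr, an adjacency matrix adj.mat encoding the causal graph, and train.data/test.data, together with the adaptedData() accessor) follows my reading of the package documentation, so argument names and defaults should be double-checked.

library(fairadapt)
# toy data generated from the assumed graph S -> X1 -> Y and S -> Y
set.seed(1)
n  <- 500
S  <- factor(rbinom(n, 1, .5))
X1 <- rnorm(n, mean = ifelse(S == "1", 1, 0))
Y  <- factor(rbinom(n, 1, plogis(-1 + X1 + .5 * (S == "1"))))
df <- data.frame(S = S, X1 = X1, Y = Y)
# adjacency matrix of the assumed causal graph (rows = parents, columns = children)
vars <- c("S", "X1", "Y")
adj  <- matrix(0, 3, 3, dimnames = list(vars, vars))
adj["S", c("X1", "Y")] <- 1
adj["X1", "Y"] <- 1
# adapted ("counterfactual") version of the data, test data without the outcome
mod <- fairadapt(Y ~ ., prot.attr = "S", adj.mat = adj,
                 train.data = df[1:400, ], test.data = df[401:500, -3])
head(adaptedData(mod))

The adapted data can then be plugged into any predictive model, and the change in prediction for a given individual gives the counterfactual comparison discussed above.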

We can also consider the German credit dataset

or the causal graph used in Watson et al. (2021),

Then, those techniques can be used to compare the predictions of six fictitious individuals.

TD General Insurance Pricing Seminar

Tomorrow, I will give a talk at the TD General Insurance Pricing Seminar, on fairness and ethics in insurance. Slides are now online.

After a very general (and long) introduction, to motivate our recent work on discrimination, I will try to explain how to quantify possible discrimination (with respect to a binary sensitive attribute), using the Wasserstein distance and optimal transport,

and the use of Wasserstein barycenters to mitigate discrimination.

I will also mention our workshop in May, at Laval University.