To complement my talk for the Association des Masters d’Actuariat, I am posting this entry, which goes through the slides of my presentation, together with some of the accompanying discussion. First of all, my sincere apologies, but the slides are in English. What I discuss is very loosely based on recent work carried out with quite a few people, including Laurence Barry (from the PARI chair), Marie-Pier Côté (professor at Université Laval, in Québec City), Olivier Côté (PhD student whom I co-supervise with Marie-Pier), Agathe Fernandes Machado (PhD student at UQAM), Ewen Gallic (associate professor in Marseille), François Hu (former postdoctoral fellow, now a consultant in France), Philipp Ratz (former postdoctoral fellow, now a consultant in Switzerland), as well as Ana Patron Pinerez (former intern, visiting from Colombia) and Mulah Moriah (PhD student in France, whom I supervised for his master’s thesis).
Sequential fairness with multiple attributes, talk at the L^P seminar
François presented our joint paper “A Sequentially Fair Mechanism for Multiple Sensitive Attributes”, written with Philipp Ratz, at the L^P seminar (ISFA-CNAM-HEC Lausanne) today.
Thanks François! (more to come soon)
Talk at the Financial Conduct Authority, UK
This morning (Montréal time), I will give a talk for the Financial Conduct Authority in London, in the UK, on “Demystify fairness and discrimination in insurance, and avoid some pitfalls“.
What’s unique about insurance is that even statistical discrimination, which by definition is devoid of malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat their policyholders equally, without discrimination based on race, gender, age or other characteristics, even when it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination, between risky and non-risky policyholders, lies at the core of actuaries’ work. And this risk is often statistically correlated with sensitive characteristics that regulation would like to prohibit insurers from taking into account. The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject. Most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data facilitate statistical or proxy discrimination, and black-box algorithms do not facilitate understanding. Not to mention the various regulations that make it difficult to collect sensitive information, and ultimately to test whether decisions are discriminatory, especially indirectly.
A causal perspective on direct and indirect discrimination related to sensitive characteristics in insurance predictive models
This afternoon, Olivier Côté will give a talk, “Perspective causale sur les discriminations directes et indirectes liées aux caractéristiques sensibles dans les modèles prédictifs en assurance”, at the IID student seminar on causal models and causal inference in medicine and insurance.
Fairness toward policyholders is a central issue for the insurance industry. Insurers rely heavily on personal data for many automated decisions. In this work, we explore fair pricing in insurance through a causal lens, focusing on direct and indirect discrimination related to sensitive variables that are defined and observed. We analyze the mechanisms through which these variables influence pricing and propose a categorization of fair methodologies into five families, each with clear fairness properties. Causal reasoning makes it possible to represent biases, to clarify assumptions about the data-generating mechanism, and to disentangle the signal of each variable from the sensitive variables.
The tools developed will guide insurance practitioners toward better practices in algorithmic fairness.
Algorithmic fairness with optimal transport: quantifying counterfactual fairness and mitigating group fairness
This Friday, I will be at Université Laval, in Québec City, to give a talk at the Statlab annual day.
In this talk, we present two complementary approaches to addressing fairness in algorithmic decision-making, regarding individual and group fairness. First, we use Wasserstein barycenters to obtain strong Demographic Parity with one or multiple sensitive features. Our method provides a closed-form solution for the optimal, sequentially fair predictor, enabling possible interpretation of correlations between sensitive attributes. Then, we introduce a novel method that links two existing counterfactual approaches: causal graph-based adaptations (Plečko and Meinshausen, 2020) and optimal transport (De Lara et al., 2024). By extending “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, we propose a new group framework, termed sequential transport, which we apply to the problem of individual fairness. Theoretical foundations are established, followed by numerical demonstrations on synthetic and real datasets.
Slides are available online.
“Mathematical Foundations of AI” day at the Sorbonne Center for Artificial Intelligence
On Thursday 12th, I will attend the Mathematical Foundations of AI day, organized by the DATAIA Institute and SCAI (Sorbonne Center for Artificial Intelligence), in association with several scientific societies (namely the Fondation Mathématique Jacques Hadamard (FMJH), the Fondation Sciences Mathématiques de Paris (FSMP), the MALIA group of the Société Française de Statistique, and the Société Savante Francophone d’Apprentissage Machine (SSFAM)).
Slides are now online.
In this talk, we present two complementary approaches to addressing fairness in algorithmic decision-making through the lens of counterfactual reasoning and optimal transport, both in individual and group fairness. First, we introduce a novel method that links two existing counterfactual approaches: causal graph-based adaptations (Plečko and Meinshausen, 2020) and optimal transport (De Lara et al., 2024). By extending “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, we propose a new group framework, termed sequential transport, which we apply to the problem of individual fairness. Theoretical foundations are established, followed by numerical demonstrations on synthetic and real datasets. Building on this, we extend the discussion to algorithmic fairness in the presence of multiple sensitive attributes. While traditional fairness frameworks focus on eliminating bias with respect to a single sensitive variable, their effectiveness diminishes with multiple sensitive characteristics. To address this, we propose a sequential fairness framework based on multi-marginal Wasserstein barycenters, generalizing Strong Demographic Parity to handle multiple sensitive features. Our method provides a closed-form solution for the optimal, sequentially fair predictor, enabling interpretation of correlations between sensitive attributes. Furthermore, we introduce an approximate fairness framework that balances risk and unfairness, allowing for prioritization of fairness across specific attributes. Both approaches are supported by comprehensive numerical experiments on synthetic and real-world datasets, showcasing the practical efficacy of these methods in promoting fair decision-making. Together, they provide a robust framework for addressing fairness in complex, multi-attribute settings while preserving interpretability and flexibility.
References are given below
- we will further discuss counterfactual fairness, initiated in Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, and the more recent paper Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness
- we will discuss Wasserstein barycenters with multiple sensitive attributes, as in A Sequentially Fair Mechanism for Multiple Sensitive Attributes
[added on Sept 13th] Thanks for this nice picture.
Seminario de Matemáticas Aplicadas
Tomorrow (Thursday), Agathe will give a remote presentation at Quantil’s applied mathematics seminar, on algorithmic fairness, and more specifically on the use of optimal transport to obtain fairer predictive models in insurance.
Thanks to Ana Patrón Piñerez for putting us in touch.
Talk and workshop at the Centre de recherche sur l’intelligence² en gestion de systèmes complexes (CRI2GS)
On Thursday, I will give a talk (and run a workshop) at the Centre de recherche sur l’intelligence² en gestion de systèmes complexes (CRI2GS), at UQAM, which I have just joined.
I have put a few slides online.
Talk in København
Tomorrow, I will give a talk at Københavns Universitet, on “Using optimal transport to quantify and mitigate unfair insurance predictions”. It is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).
The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that these practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical biases in the data, eliminating, or at least mitigating, it is desirable. With the shift from more traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables from the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to mitigate the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications. The talk will be based on a recent textbook (978-3-031-49782-7) as well as work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).
From Contemplative to Predictive Modeling
As mentioned yesterday, I gave a talk, this afternoon, entitled From Contemplative to Predictive Modeling (in actuarial science and risk management). Slides are available online, but maybe I can take some time to explain what I talked about…
It is usually claimed that actuaries build ‘predictive models’, but most of the time, what they do is really ‘contemplative modeling’, in the sense that they use past information and hope that the future will be more or less the same (corresponding to the idea of generalization in machine learning). In the context of climate change (but also when modeling insurance market competition), this is no longer the case: the data used to train the models do not have the same distribution as the data we will face in the future.
Talk in Leuven, Belgium
Tomorrow, I will be (back) at KU Leuven for a talk entitled From Contemplative to Predictive Modeling (in actuarial science and risk management). Slides are available online. But for now, I am enjoying a short sunny break in Brussels…
SCOR Foundation – Scope and limits of Artificial intelligence
On May 15, 2024, the SCOR Foundation for Science hosted a webinar titled “Scope and limits of Artificial intelligence”, delivered by Arthur Charpentier. A professor in the Department of Mathematics at the University of Quebec in Montreal and a member of the Institute of Actuaries, Arthur Charpentier is an internationally recognized expert in actuarial science and the author of numerous academic articles published in the best actuarial academic journals worldwide.
During the webinar, Arthur Charpentier discussed the research project “Fairness of predictive models: an application to insurance markets”, which is supported by the SCOR Foundation for Science. This project addresses biases within the automatic artificial intelligence algorithms utilized to determine optimal pricing in individual policies. Its aim is to mitigate or eliminate such biases, which could lead to inequities or discriminatory practices based on factors such as gender, race, religion, or origin in the coverage provided by insurers or reinsurers to policyholders.
Trip in (Northern) Europe
The next two weeks, I will be in (Northern) Europe, with a first stop in Brussels (to visit colleagues), then in Leuven (where I will give a talk on Monday at KU Leuven), then in København (where I will give a talk on Friday at Københavns Universitet), and finally in Stockholm (at Stockholm University, for the Insurance Data Science conference).
In the Fall, I will be back in Europe, with stops in Lisbon (for the European Actuarial Journal conference), in France (for a Cerisy colloquium), and in Warsaw, Poland, where I will give a two-day course on Insurance, Biases, Discrimination and Fairness…
More to come soon…
“Scope and limits of artificial intelligence” at the SCOR foundation monthly webinar
This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR Foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the foundation (newsletter #1 is online).
Insurance (and further motivations)
Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS, the recidivism risk score analyzed by ProPublica.
Interestingly, we have the data to analyse that one. In the original analysis, conditional on not re-offending, the proportions of individuals wrongly classified as high risk are significantly different in the two protected groups, so the algorithm is racist.
The answer was that, actually, conditional on being classified as high risk, the probabilities of re-offending in the two protected groups are not significantly different, so the algorithm is not racist.
So clearly, we can start to see that it will not be so easy, since using the same data and the same model, two different conclusions can be obtained.
We will also discuss legal aspects.
This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.
I can also mention some recent projects, in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).
And finally, I should stress that discrimination has little to do with the intention of the statistician. This is the idea of indirect discrimination.
I should also mention “redlining”. About 100 years ago, in the US, we started to see maps, created by the HOLC (based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.
As a side note, we nowadays see some “blue-lining” related to climate risks:
“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.
Finally, I just want to recall that algorithms just tend to reproduce what can be observed in data. If there is a difference between men and women, they will reproduce it.
A bit more on insurance
I should also stress an important problem (related to a paper we wrote, in French, a few years ago). Classically, when modeling a categorical variable, such as a binary variable y\in\{0,1\}, practitioners just care about predicting the right category. On the left, we have pictures of cats and dogs used to train a model, and we then try it on a new picture, which is either a cat or a dog. Somehow, there is a ground truth, and it is possible to see whether we are right or wrong. The same holds if we want to detect a disease on medical images. Things change as we move to the right. In the middle, we have a model that predicts whether it will rain or not; but here, what we probably care about is the probability of rain. On the right, we have the actuarial problem of modeling claim frequencies. We do not want to predict who will file a claim, but we want a good estimate of the probability of filing a claim. The challenge, clearly, is that this probability cannot be observed: we cannot observe the latent risk factor, only whether people had an accident or not. Someone with a very small probability can still file a claim, and very bad drivers can be lucky and have no accident during a given year.
Again, in insurance, we care more about the score, i.e. the estimated probability, than about the predicted class \widehat{y}. So we can slightly modify the standard fairness definitions, to be based not on predicted classes \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.
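For completeness, the three criteria can be written as conditional independence statements. This is a standard formulation (not taken verbatim from the slides), written here for a prediction \widehat{Y} and a sensitive attribute S; roughly speaking, the score-based versions replace \widehat{Y} by the score m(\boldsymbol{X},S).

```latex
% independence (demographic parity): the prediction does not depend on the sensitive attribute
\widehat{Y} \perp S
% separation (equalized odds): independence, conditional on the true outcome
\widehat{Y} \perp S \mid Y
% sufficiency (calibration): the outcome is independent of S, given the prediction
Y \perp S \mid \widehat{Y}
```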
Quantifying unfairness with optimal transport
Let us start with demographic parity. A weak version asks that, on average, scores in the two groups should be identical (or close). An alternative is the strong version, asking for equality in distribution: for any set \mathcal{I}\subset[0,1], the probability that the score lies in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.
Mathematically, we need a distance between the distributions of scores in the two groups. A popular one is the Wasserstein distance, which is related to optimal transport.
The empirical version is perhaps easier to understand: the mapping is based on a matching of individuals, ranked by their scores, across the two groups.
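To fix ideas, here is a minimal numerical sketch (my own toy example, with hypothetical beta-distributed scores, not the data or code from the talk) of how these quantities can be computed:

```python
# A minimal sketch: quantifying weak and strong demographic parity
# between two protected groups, with hypothetical scores.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)
score_A = rng.beta(2, 5, size=5000)   # hypothetical scores m(x, s) in group A
score_B = rng.beta(3, 4, size=5000)   # hypothetical scores m(x, s) in group B

# weak demographic parity: compare average scores
weak_gap = abs(score_A.mean() - score_B.mean())

# strong demographic parity: compare the full score distributions,
# here with the 1-Wasserstein distance
strong_gap = wasserstein_distance(score_A, score_B)

# empirical version: with equal group sizes, the optimal transport simply
# matches individuals by rank (sorted scores), and the distance is the
# average gap between matched individuals
matched_gap = np.mean(np.abs(np.sort(score_A) - np.sort(score_B)))

print(f"weak DP gap     : {weak_gap:.4f}")
print(f"strong DP gap   : {strong_gap:.4f}")
print(f"rank-matched W1 : {matched_gap:.4f}")   # equals strong_gap, up to numerics
```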
As a cultural side note, here are a couple of slides to explain why this has to do with “optimal transport”, going back to Monge (1781)’s problem. It is all about transporting the sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…
and then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the pile, etc.
Mitigation
For mitigation (once we have observed that there was discrimination, as discussed previously), heuristically, we want to be somewhere in between the two distributions of the two subgroups.
Being “in between” can be interpreted locally: for someone in group A, the fair prediction should be a weighted average (with weights related to the proportions of the two groups) of the prediction obtained as someone in group A, and of some sort of counterfactual in the other group, namely the prediction that person would have obtained, at the same probability level, had she been in group B.
For the other group, it is the other way around.
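Here is a minimal sketch of that mitigation step in the two-group, one-dimensional case (my own illustration, under simplifying assumptions; the methodology in the papers is more general): the fair score is a weighted average of an individual’s score and of its counterfactual at the same probability level in the other group, which corresponds to the Wasserstein barycenter of the two score distributions.

```python
# A minimal sketch of barycenter-based mitigation with two groups.
import numpy as np

rng = np.random.default_rng(123)
score_A = rng.beta(2, 5, size=4000)   # hypothetical scores, group A
score_B = rng.beta(3, 4, size=6000)   # hypothetical scores, group B
p_A = len(score_A) / (len(score_A) + len(score_B))
p_B = 1 - p_A

def fair_score(x, own, other, w_own, w_other):
    """Map a score x from its own group to the barycenter of the two groups."""
    u = np.mean(own <= x)                   # probability level of x in its own group
    counterfactual = np.quantile(other, u)  # score at the same level in the other group
    return w_own * x + w_other * counterfactual

# example: adjust one individual score from group A
x = 0.30
x_fair = fair_score(x, score_A, score_B, p_A, p_B)
print(f"original score {x:.2f} -> fair score {x_fair:.2f}")
```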
Beyond demographic parity
If we get back to our COMPAS examples, demographic parity, in the standard classification-based definition, would be translated as
If we get back to the original motivation we gave, it had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.
More generally, if we consider weak versions of those independence criteria, we obtain equalities of moments, within each protected subgroup.
Let us say a bit more about calibration. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.
Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
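A small simulated example of that calibration check (my own sketch, not from the slides), using scikit-learn’s calibration_curve:

```python
# Calibration check on simulated data: among observations with predicted
# probability close to p, the observed frequency of 1's should be close to p.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(7)
p_true = rng.uniform(0, 1, size=20_000)           # latent "true" probabilities
y = rng.binomial(1, p_true)                       # observed 0/1 outcomes
p_hat = np.clip(p_true + rng.normal(0, 0.05, p_true.size), 0, 1)  # a roughly calibrated score

frac_pos, mean_pred = calibration_curve(y, p_hat, n_bins=10)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"predicted ~ {mp:.2f} -> observed frequency {fp:.2f}")
```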
To conclude this digression, I can mention the following example, highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures, generated by some algorithm, and more precisely, a flow of pictures, from a woman to a man.
Below, we can see the probabilities given by some online application that returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?
Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used, to quantify counterfactual unfairness. But I won’t be too long here.
Finally, an opening for next year’s agenda: interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular
let the data speak for itself
In insurance, interpretation is very important, probably more important than model assumptions.
Interpretation becomes a key concept when dealing with multiple sensitive attributes.
To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem…
Tomorrow, we will discuss this further at our workshop, in Québec City.
Brief talk on non-diversification of extreme risks, for France Stratégie
Tomorrow, I will give a (brief) invited talk at our working group, at France Stratégie, on the (non) diversification of extreme risks. Slides are online, and the results are related to recent papers by Paul Embrechts and Ruodu Wang. More precisely, here are some references.
But first, before discussing large risks, I need to get back (quickly) to the Pareto distribution.
To visualize Pareto tails, one can consider the Pareto plot (the empirical survival function, on a log-log scale). If the points lie on a straight line with (negative) slope -\alpha, then the observations are Pareto distributed, with tail index precisely \alpha. Depending on the slope (whether it is above or below -1), risks have either infinite or finite mean.
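For readers who want to reproduce such a plot, here is a quick sketch (my own code, with simulated data, not the figures from the slides):

```python
# A Pareto plot: log-log plot of the empirical survival function.
# For Pareto samples, the points line up on a straight line of slope -alpha.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
alpha = 1.5                                   # tail index (finite mean since alpha > 1)
x = rng.pareto(alpha, size=10_000) + 1        # Pareto(alpha) samples with minimum 1

x_sorted = np.sort(x)
survival = 1 - np.arange(1, len(x) + 1) / (len(x) + 1)   # empirical P(X > x)

plt.scatter(np.log(x_sorted), np.log(survival), s=2)
plt.xlabel("log(x)")
plt.ylabel("log P(X > x)")
plt.title(f"Pareto plot, slope should be close to {-alpha}")
plt.show()
```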
Infinite mean is actually not that common. It is hard to visualize what it means, because for any (finite) n, the empirical average \displaystyle{\overline{x}=\frac{1}{n}\sum_{i=1}^nx_i} always exists. To visualize what is going on, we can plot the ratio of the maximum \max\{x_i\} over the sum, which is related to the concept of “top share” in inequality studies.
On the left, risks with finite variance (and, of course, finite mean). In the middle, infinite variance but finite mean. After a while, it becomes quite rare for the maximum to account for more than 1% of the total sum. On the right, the infinite mean case (not too far from the limit, since \alpha is here 0.95, and a finite mean requires \alpha to exceed one).
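And a small sketch of that max-over-sum ratio (again my own simulation, with the three regimes mentioned above):

```python
# The max-over-sum ratio, to "see" infinite mean: with alpha < 1,
# the largest observation keeps representing a large share of the total.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
for alpha in (2.5, 1.5, 0.95):     # finite variance / finite mean only / infinite mean
    x = rng.pareto(alpha, size=n) + 1
    ratio = x.max() / x.sum()
    print(f"alpha = {alpha:4.2f} : max / sum = {ratio:.3f}")
```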
Now, if we get back to risks and insurance, recall some basic facts about stochastic dominance.
Then we have the following result (this is actually the most important slide).
I did include a slide with the mathematical proof (which is actually quite lovely, and straightforward).
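For readers without the slides: the result in question (in the recent Embrechts–Wang line of work) is, roughly, that for extremely heavy-tailed risks with infinite mean, diversification does not reduce risk; the diversified position (X_1+X_2)/2 is larger than a stand-alone loss X_1 in the sense of first-order stochastic dominance. Here is a small Monte Carlo sketch (my own illustration, not code from the talk) of that phenomenon:

```python
# Monte Carlo sketch: compare the survival function of a single Pareto loss X1
# with that of the diversified position (X1 + X2)/2, for iid losses with
# infinite mean (tail index alpha = 0.95).
import numpy as np

rng = np.random.default_rng(3)
alpha, n = 0.95, 1_000_000
x1 = rng.pareto(alpha, size=n) + 1
x2 = rng.pareto(alpha, size=n) + 1
diversified = 0.5 * (x1 + x2)

for t in (2, 5, 10, 100):
    p_single = np.mean(x1 > t)
    p_div = np.mean(diversified > t)
    print(f"t = {t:4d} : P(X1 > t) = {p_single:.4f}, P((X1+X2)/2 > t) = {p_div:.4f}")
# with infinite-mean Pareto losses, the diversified position has larger
# exceedance probabilities at every threshold: diversification does not help.
```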