This afternoon, after a short visit to ETH Zürich yesterday, I will be at the Département de sciences actuarielles of the Université de Lausanne. I will be talking about using optimal transport to mitigate unfair predictions and quantify counterfactual fairness. Slides are now online.
Paradoxes of segmentation and discrimination in insurance
This article was originally written in French, and published here
“The decision cannot be racist since it was made without any information about the person’s ethnic origin.” We’ve all heard this kind of statement at one time or another, whether about racism, ageism, or sexism, and whether about human decisions, models, or algorithms. However, “Kranzberg’s Law” reminds us that technology is neither good nor bad, but it is not neutral either. Neutrality may only come at a certain price. It may be time to revisit the major principles surrounding segmentation and fairness in insurance, to better understand what we are talking about when we raise the issue of discrimination.
Figure: Krater representing Theseus and Procrustes, and Theseus killing the Crommyon boar (source: The Miriam and Ira D. Wallach Division of Art, 1862–1864)
Measuring and correcting biases in AI systems
At noon today, I will take part (online) in the “lundis de l’IA et de la finance” series, on the theme “measuring and correcting biases in AI systems”, as part of a seminar co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris. My opening talk will revisit fairness in the context of insurance [the slides are available].
Confidence and Fairness: Scientific Foundations in AI and Risk (mid-May in Paris)
In mid-May, we are organizing, with the SCOR Foundation for Science, a one-day workshop on Confidence and Fairness: Scientific Foundations in AI and Risk. Registration is now open! The agenda will be
9:00 – registration
9:20 – introduction speech
9:30 – Arthur Charpentier
10:15 – coffee break
10:45 – Toon Calders
11:30 – Isabel Valera
12:15 – lunch break
13:15 – Jean-Michel Loubes
14:00 – Evgeny Chzhen
14:45 – Michele Loi
15:30 – coffee break
16:00 – Aurélie Lemmens
16:45 – François Hu and Antoine Ly
17:30 – closing cocktail
EquiPy: Sequential Fairness using Optimal Transport in Python
Our article EquiPy: Sequential Fairness using Optimal Transport in Python, written with Agathe Fernandes Machado, Suzie Grondin, François Hu and Philipp Ratz, is now online. See also equilibration.github.io/equipy/ for the Python package.
Algorithmic fairness has received considerable attention due to the failures of various predictive AI systems that have been found to be unfairly biased against subgroups of the population. Many approaches have been proposed to mitigate such biases in predictive systems; however, they often struggle to provide accurate estimates and transparent correction mechanisms when multiple sensitive variables, such as a combination of gender and race, are involved. This paper introduces a new open-source Python package, EquiPy, which provides an easy-to-use and model-agnostic toolbox for efficiently achieving fairness across multiple sensitive variables. It also offers comprehensive graphical utilities that enable the user to interpret the influence of each sensitive variable within a global context. EquiPy makes use of theoretical results that allow the complexity arising from the use of multiple variables to be broken down into easier-to-solve sub-problems. We demonstrate the ease of use for both mitigation and interpretation on publicly available data derived from the US Census, and provide sample code for its use.
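To give a feel for the underlying idea (this is a hedged numpy sketch of the single-attribute Wasserstein-barycenter correction, not the EquiPy API, which is documented at equilibration.github.io/equipy/): each score is pushed through its group-conditional empirical CDF and mapped back through the mixture of group quantile functions, so that corrected scores share the same distribution across groups (strong demographic parity). The function name and the empirical-quantile estimator are illustrative assumptions.

```python
import numpy as np

def barycenter_fair_scores(scores, groups):
    """Post-process scores so that their distribution no longer depends on the
    (single) sensitive attribute, using the Wasserstein barycenter of the
    group-conditional score distributions (illustrative sketch only)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()          # group proportions p_a
    fair = np.empty_like(scores)
    for a in labels:
        mask = groups == a
        # empirical CDF of the scores within group a, evaluated at each score
        u = (np.searchsorted(np.sort(scores[mask]), scores[mask], side="right")
             / mask.sum())
        # barycenter quantile: mixture of the group quantile functions
        fair[mask] = sum(w * np.quantile(scores[groups == b], u)
                         for b, w in zip(labels, weights))
    return fair

# toy usage: two groups with shifted score distributions
rng = np.random.default_rng(0)
g = rng.integers(0, 2, 1000)
s = rng.normal(0.4 + 0.2 * g, 0.1)
s_fair = barycenter_fair_scores(s, g)
print([round(s_fair[g == a].mean(), 3) for a in (0, 1)])  # nearly equal means
```

The sequential, multi-attribute case handled by EquiPy essentially iterates this kind of correction one sensitive variable at a time, which is what makes the influence of each variable interpretable.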
Actuarial ethics and the future of the profession, in the European Actuary
Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide
With Marie-Pier Côté and Olivier Côté, we recently uploaded a short note, Selection Bias in Insurance: Why Portfolio-Specific Fairness Fails to Extend Market-Wide, now available on SSRN.
Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer’s clients. However, each insurance company’s portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well for the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from subpopulations infrequently observed, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
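As a hedged illustration of the last point of the abstract (not the exact procedure of the note), the sketch below shows how a fairness correction of the barycenter type can use, as mixture weights, an externally provided estimate of the protected-attribute distribution in the whole insured population rather than the portfolio shares, which are distorted by selection. The function name, the two-group setting and the weighting scheme are assumptions made for the example.

```python
import numpy as np

def population_fair_scores(scores, groups, population_weights):
    """Barycenter-type correction where the mixture weights come from a
    third-party estimate of the protected-attribute distribution in the whole
    insured population, instead of the (selection-biased) portfolio shares.
    Illustrative sketch only."""
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    labels = np.unique(groups)
    fair = np.empty_like(scores)
    for a in labels:
        mask = groups == a
        # within-group empirical CDF of the premiums / scores
        u = (np.searchsorted(np.sort(scores[mask]), scores[mask], side="right")
             / mask.sum())
        # mixture of group quantile functions, weighted by population shares
        fair[mask] = sum(population_weights[b] * np.quantile(scores[groups == b], u)
                         for b in labels)
    return fair

# e.g. the portfolio contains 20% of group 1, but the market-wide share is 50%:
# fair = population_fair_scores(s, g, {0: 0.5, 1: 0.5})
```

As the abstract notes, correcting the weights this way mitigates selection bias, but the resulting premiums are no longer balanced at the portfolio level.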
Talk at the Financial Conduct Authority, UK
This morning (Montréal time), I will give a talk for the Financial Conduct Authority in London, UK, on “Demystify fairness and discrimination in insurance, and avoid some pitfalls”.
What is unique about insurance is that even statistical discrimination, which by definition is devoid of malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat their policyholders equally, without discrimination based on race, gender, age or other characteristics, even when it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination between risky and non-risky policyholders lies at the core of actuaries’ work, and this risk is often statistically correlated with sensitive characteristics that regulation would like to prohibit insurers from taking into account. The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject; most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data facilitate statistical or proxy discrimination, and black-box algorithms do not facilitate understanding, not to mention the various regulations that make it difficult to collect sensitive information and, ultimately, to test whether decisions are discriminatory, especially indirectly.
Algorithmic fairness with optimal transport: quantifying counterfactual fairness and mitigating group fairness
This Friday, I will be at Université Laval, in Québec, to give a talk at the Statlab annual day.
In this talk, we present two complementary approaches to addressing fairness in algorithmic decision-making, regarding individual and group fairness. First, we use Wasserstein barycenters to obtain strong demographic parity with one or multiple sensitive features. Our method provides a closed-form solution for the optimal, sequentially fair predictor, enabling possible interpretation of correlations between sensitive attributes. Then, we introduce a novel method that links two existing counterfactual approaches: causal graph-based adaptations (Plečko and Meinshausen, 2020) and optimal transport (De Lara et al., 2024). By extending “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, we propose a new framework, termed sequential transport, which we apply to the problem of individual fairness. Theoretical foundations are established, followed by numerical demonstrations on synthetic and real datasets.
Slides are available online.
Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness
Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness”, written with Agathe Fernandes Machado and Ewen Gallic, is now online.
In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.
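A rough numerical sketch of the idea, under simplifying assumptions and not the paper’s implementation: with two covariates ordered according to a causal graph X1 → X2, the counterfactual is built sequentially, first transporting X1 by marginal quantile matching between the two groups, then transporting X2 conditionally on X1, with the conditional quantiles crudely approximated here by nearest neighbours. Function and variable names are illustrative.

```python
import numpy as np

def sequential_counterfactual(x1, x2, x1_star, x2_star, k=50):
    """Map observations (x1, x2) from one group onto the other group
    (x1_star, x2_star), following the ordering x1 -> x2 of a causal graph:
    marginal quantile matching for x1, then conditional matching for x2
    given the transported x1 (nearest-neighbour approximation, sketch only)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    x1_star, x2_star = np.asarray(x1_star, dtype=float), np.asarray(x2_star, dtype=float)
    # step 1: marginal transport of x1 via quantile matching
    u1 = np.searchsorted(np.sort(x1), x1, side="right") / len(x1)
    t1 = np.quantile(x1_star, u1)
    # step 2: transport x2 conditionally on the transported value of x1
    t2 = np.empty_like(x2)
    for i in range(len(x2)):
        nb = np.argsort(np.abs(x1 - x1[i]))[:k]            # neighbours in source
        nb_star = np.argsort(np.abs(x1_star - t1[i]))[:k]  # neighbours in target
        u2 = (np.sum(x2[nb] <= x2[i]) + 1) / (k + 1)       # conditional rank
        t2[i] = np.quantile(x2_star[nb_star], u2)          # conditional quantile
    return t1, t2

# usage sketch: counterfactual "had this individual been in the other group"
# t1, t2 = sequential_counterfactual(x1_g0, x2_g0, x1_g1, x2_g1)
```

The ordering of the variables is exactly where the causal graph enters: a different topological order of the graph yields a different sequential (Knothe-type) transport, hence different counterfactuals.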
Insurance, Biases, Discrimination and Fairness
Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…
This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.
The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.
Warsaw Actuarial School (September 2024)
In September, I will be in Warsaw, Poland, to give a course on Insurance, Biases, Discrimination and Fairness (based on the recently published textbook). More to come after the summer break.
SCOR Foundation – Scope and limits of Artificial intelligence
On May 15, 2024, the SCOR Foundation for Science hosted a webinar titled “Scope and limits of Artificial intelligence”, delivered by Arthur Charpentier. A professor in the Department of Mathematics at the University of Quebec in Montreal and a member of the Institute of Actuaries, Arthur Charpentier is an internationally recognized expert in actuarial science and the author of numerous academic articles published in the best actuarial academic journals worldwide.
During the webinar, Arthur Charpentier discussed the research project “Fairness of predictive models: an application to insurance markets”, which is supported by the SCOR Foundation for Science. This project addresses biases within the automatic artificial intelligence algorithms utilized to determine optimal pricing in individual policies. Its aim is to mitigate or eliminate such biases, which could lead to inequities or discriminatory practices based on factors such as gender, race, religion, or origin in the coverage provided by insurers or reinsurers to policyholders.
Quantifying Fairness and Discrimination in Predictive Models
The article Quantifying Fairness and Discrimination in Predictive Models was just published in Machine Learning for Econometrics and Related Topics, Springer.
The analysis of discrimination has long interested economists and lawyers. In recent years, the computer science and machine learning literature has taken an interest in the subject, offering an interesting re-reading of the topic. These questions follow numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data and the use of increasingly opaque algorithms, it is not surprising to end up with discriminatory algorithms, because it has become easy to obtain a proxy for a sensitive variable by enriching the data indefinitely. According to [69], “technology is neither good nor bad, nor is it neutral”, and therefore, “machine learning won’t give you anything like gender neutrality ‘for free’ that you didn’t explicitly ask for”, as claimed by [61]. In this article, we will come back to the general context, for predictive models in classification. We will present the main concepts of fairness, called group fairness, based on independence between the sensitive variable and the prediction, possibly conditional on some information. We will then go further, presenting the concepts of individual fairness. Finally, we will see how to correct a potential discrimination, in order to guarantee that a model is more ethical.
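To make the group-fairness notion concrete, here is a short, hedged sketch (names are illustrative, not taken from the article) computing two standard demographic-parity diagnostics for a binary classifier: the difference and the ratio (disparate impact) of acceptance rates across the groups defined by the sensitive variable.

```python
import numpy as np

def demographic_parity_report(y_pred, sensitive):
    """Acceptance rate per group, plus the demographic parity gap and the
    disparate impact ratio (min rate / max rate); illustrative sketch."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rates = {a: float(y_pred[sensitive == a].mean()) for a in np.unique(sensitive)}
    values = np.array(list(rates.values()))
    return {
        "acceptance_rates": rates,
        "parity_gap": float(values.max() - values.min()),
        "disparate_impact": float(values.min() / values.max()),
    }

# e.g. demographic_parity_report(model.predict(X) > 0.5, data["gender"])
```

Conditional versions of these criteria (equalized odds, and so on) are obtained by computing the same quantities within strata defined by the conditioning information.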
Fresh from the oven…
14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters, and 67 litres of beer were needed to complete this adventure…
(Goscinny and Uderzo (1965*), Astérix et Cléopâtre)
Almost better than hot, freshly baked bagels…
The textbook Insurance, Biases, Discrimination and Fairness is now out, and just arrived today! Even though I’ve spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it’s still an immense feeling of pride to open your book for the very first time.
* Astérix et Cléopâtre is the last Astérix of the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chair of statistical risk modelling, and a living memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée). “When the Pilote collection switched to editions with only the Astérix titles in the menhir, I think the sentence disappeared”… That was the version my grandparents had, the one I (re)devoured every year when I was a kid.