Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
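To fix ideas, here is a minimal sketch of the kind of decomposition involved, assuming a linear model with a binary sensitive attribute; the function and the simulated data are illustrative only, not the estimator from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's exact characterization): for a linear
# model f(x, s) = w @ x + w_s * s + b with binary sensitive attribute s, the
# gap in mean predictions between the two groups splits into a direct part
# (the coefficient of s) and an indirect part (features correlated with s).
def bias_decomposition(w, w_s, X, s):
    mu1 = X[s == 1].mean(axis=0)   # feature means in group s = 1
    mu0 = X[s == 0].mean(axis=0)   # feature means in group s = 0
    direct = w_s                   # contribution of the sensitive attribute itself
    indirect = w @ (mu1 - mu0)     # contribution flowing through correlated features
    return direct, indirect

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
X = rng.normal(size=(1000, 3)) + 0.8 * s[:, None]   # features correlated with s
print(bias_decomposition(np.array([0.5, -0.2, 0.3]), 0.4, X, s))
```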
We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.
Thursday morning, I will give the fifth (and last) lecture of the 数学・数理科学グローバル特別講義6 (Global Special Lectures in Mathematics and Mathematical Sciences 6) at Kyoto University. The lecture will take place in Building 3, Large Conference Room 127 (3号館127大会議室). Lecture notes are available online.
Tuesday morning, I will give the fourth lecture of the 数学・数理科学グローバル特別講義6 (Global Special Lectures in Mathematics and Mathematical Sciences 6) at Kyoto University. The lecture will take place in Building 3, Large Conference Room 127 (3号館127大会議室). Lecture notes are available online.
Fairness metrics often lack actuarial relevance and are expressed in abstract units, obscuring real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people. This session will present new research from the CAS Race and Insurance Pricing series, focusing on the unique challenge of establishing fairness in actuarial pricing. We argue that actuarial fairness, solidarity, and causality form the three dimensions of fairness in insurance. These give rise to a five-point spectrum of pricing benchmarks, each reflecting distinct fairness goals and trade-offs. We quantify the monetary impact of unfairness at both the policyholder and segment levels through a large-scale Québec auto insurance case study.
Monday morning, I will give the third lecture of the 数学・数理科学グローバル特別講義6 (Global Special Lectures in Mathematics and Mathematical Sciences 6) at Kyoto University. The lecture will take place in Building 3, Large Conference Room 127 (3号館127大会議室). Lecture notes are available online.
Today, I will give the second lecture of the 数学・数理科学グローバル特別講義6 (Global Special Lectures in Mathematics and Mathematical Sciences 6) at Kyoto University. The lecture will take place in Building 3, Large Conference Room 127 (3号館127大会議室). Lecture notes are available online.
According to actuarial standards of practice, insurance pricing relies on grouping policyholders by risk to set adequate premiums. Modern predictive models, especially machine learning, excel at detecting statistical associations to differentiate risks, but they can learn spurious or undesired correlations. This raises concerns when socioeconomic or demographic factors may (intentionally or inadvertently) affect the fairness of insurance pricing.
Fairness in insurance is difficult to operationalize due to its ambiguity. Fairness metrics from the machine learning literature lack the segment-specific relevance actuaries require and are expressed in abstract units that obscure real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people.
In this paper, we focus on fairness in actuarial pricing. We study the situation where insurance rates should be fair with respect to a categorical (or discretized) sensitive variable, such as race or economic status, and the latter is fully observed (despite the possible privacy challenges).
We argue that actuarial fairness, solidarity, and causality form the three core dimensions of fairness in insurance pricing:
– Actuarial fairness aligns premiums with expected losses, mitigating cross-subsidies,
– Solidarity aligns premiums across protected groups, mitigating disparities,
– Causality ensures models capture only true risk factors, mitigating proxy effects.
We translate these dimensions into a five-point spectrum of premiums, sketched in code after the list:
– The best-estimate premium is the most accurate predictor of losses using all available information, including the sensitive variable,
– The unaware premium is the most accurate predictor of losses using all information except the sensitive variable,
– The aware premium is the most accurate predictor of losses when controlling for the sensitive variable,
– The corrective premium is the most accurate predictor that enforces similar premium distributions across levels of the sensitive variable,
– The hyperaware premium is the most accurate approximation of the corrective premium that does not directly discriminate on the sensitive variable.
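As a rough illustration, the sketch below (our own simplification on simulated data, using ordinary least squares, not the estimators of the paper) computes the best-estimate and unaware premiums, and a corrective premium obtained by one-dimensional quantile matching; the aware and hyperaware premiums require more structure and are omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical portfolio: X rating factors, s binary sensitive attribute,
# y losses; everything below is simulated for illustration.
rng = np.random.default_rng(1)
n = 5000
s = rng.integers(0, 2, n)                        # binary sensitive attribute
X = rng.normal(size=(n, 2)) + 0.5 * s[:, None]   # rating factors, correlated with s
y = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.4 * s) * rng.gamma(2.0, 0.5, n)

best = LinearRegression().fit(np.c_[X, s], y)    # best-estimate: uses s
unaware = LinearRegression().fit(X, y)           # unaware: omits s
p_best = best.predict(np.c_[X, s])
p_unaware = unaware.predict(X)

# Corrective premium (sketch): align the two group distributions of the
# best-estimate premium via 1d quantile matching, i.e. transport both groups
# to a Wasserstein barycenter (equal group weights here, for simplicity).
def corrective(p, s):
    grid = np.linspace(0, 1, 101)
    q1 = np.quantile(p[s == 1], grid)
    q0 = np.quantile(p[s == 0], grid)
    bary = 0.5 * (q1 + q0)
    out = p.copy()
    for g, qg in [(1, q1), (0, q0)]:
        ranks = np.interp(p[s == g], qg, grid)   # empirical cdf within group
        out[s == g] = np.interp(ranks, grid, bary)
    return out

p_corrective = corrective(p_best, s)
```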
We define actuarially relevant local metrics that quantify the potential monetary impact of unfairness at the policyholder level. Proxy vulnerability is the difference between the unaware and aware premiums: it locally measures how much the allowed variables pick up the signal of the missing sensitive variable. We also define post-pricing local metrics to evaluate the fairness of any pricing structure relative to the estimated spectrum.
We integrate these components into a fairness assessment framework that partitions the policyholders, pinpoints the segments in which unfair discrimination is most severe, and evaluates local metrics to diagnose unfairness and guide intervention.
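For instance, under assumptions of our own (simulated proxy-vulnerability values, and a shallow regression tree as the partitioning device), a segment-level diagnostic could look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative sketch: partition policyholders with a shallow regression tree
# fitted on a local unfairness metric (here a simulated proxy-vulnerability
# vector, in dollars), so the leaves expose the most affected segments.
rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 3))                         # rating factors
pv = 0.2 * X[:, 0] ** 2 + rng.normal(0, 0.05, n)    # local metric, e.g. proxy vulnerability

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=200).fit(X, pv)
segment = tree.apply(X)                             # leaf id = segment label
for g in np.unique(segment):
    m = segment == g
    print(f"segment {g}: n={m.sum()}, mean vulnerability = {pv[m].mean():.3f}")
```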
We illustrate our approach with a large case study inspired by industry practice. The analysis relies on a real dataset of approximately 768,000 vehicles insured in Québec (2016–2017), covering at-fault material damage claims. We examine the fairness of a pseudo-commercial price with respect to a discretized credit score: low (vulnerable group) vs. high. This sensitive variable measures the policyholder’s economic precariousness.
– Proxy vulnerability is both material and skewed: while most policyholders may receive a modest rebate, a vulnerable minority could face 15–30% overpricing if regulation only requires that the sensitive variable be omitted,
– Our integrated framework illustrates that fairness in insurance pricing can be assessed efficiently, with minimal analyst effort. The framework provides simultaneous diagnostics from the three fairness dimensions, translates unfairness into dollar terms at the individual level, and highlights disparities across population segments.
We provide additional information and the complete code illustrated on a comprehensive simulated data example in the online supplementary material.
Designed for routine portfolio monitoring, our toolbox delivers valuable insights whether or not the sensitive attribute is included in pricing, provided it is available for assessment. The toolbox’s scalability, across large datasets and rich covariate sets, makes fairness operationalizable for actuaries: intuitive, practical, and encompassing the three fairness dimensions.
This fall, I will give the “Global Mathematics Lecture IV” at Kyoto University, a series open to all graduate students across the university (not limited to mathematics). My talk will focus on “algorithmic discrimination in predictive models”, based on Insurance, Biases, Discrimination and Fairness (published last year), with a particular emphasis on applications in insurance, a topic especially relevant for students in the actuarial/insurance track of the MSc program in the Department of Mathematics. Looking forward to engaging discussions with the students!
From Tuesday to Friday, I will attend the 60th Actuarial Research Conference in Toronto. With Olivier and Marie-Pier Côté, we will give a series of talks on fairness and discrimination.
I will speak in an Invited Session on Artificial Intelligence in Insurance (as will Marie-Pier Côté)
Olivier Côté will present in Session 1 – Bias in Assessing Financial Risk
With Olivier and Marie-Pier, we will present in one of the Casualty Actuarial Society Sponsored Sessions, Session 1 – A Scalable Toolbox for Exposing Indirect Discrimination in Insurance Rates
Tomorrow, we are organizing our workshop Confidence and Fairness: Scientific Foundations in AI and Risk, at the SCOR headquarters in Paris. I will give the keynote address for the day, presenting the work we have carried out over the past 18 months (of the three years of funding), while laying the foundations for the concepts we will be discussing throughout the day.
9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier – “Fairness of predictive models: an application to insurance markets”
10:15 – Coffee break
10:45 – Toon Calders – “Unfair, You Say? Explain Yourself!”
11:30 – Isabel Valera – “Society-centered AI: An Integrative Perspective on Algorithmic Fairness”
12:15 – Lunch break
13:15 – Jean Michel Loubes – “Beyond fairness measures, discovering the bias in the algorithm”
14:00 – Evgeny Chzhen – “An optimization approach to post-processing for classification with system constraints”
14:45 – Michele Loi – “From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making”
15:30 – Coffee break
16:00 – Aurélie Lemmens – “Fair Active Learning for Personalized Policies”
16:45 – François Hu and Antoine Ly – “Fairness and Confidence in Insurance Markets, a Practitioners Perspective”
17:30 – Closing cocktail
This afternoon, after a short visit to ETH Zürich yesterday, I will be at the Département de sciences actuarielles at the Université de Lausanne. I will be talking about using optimal transport to mitigate unfair predictions and quantify counterfactual fairness. Slides are now online.
This article was originally written in French, and published here
“The decision cannot be racist since it was made without any information about the person’s ethnic origin.” We’ve all heard this kind of statement at one time or another. Whether it’s about racism, ageism, or sexism. Whether it’s about human decisions, models, or algorithms. However, “Kranzberg’s Law”1 reminds us that technology is neither good nor bad, but it is not neutral either. Neutrality may only come at a certain price. And it may be time to revisit the major principles surrounding segmentation and fairness in insurance, to better understand what we’re talking about when we raise the issue of discrimination.
At lunchtime today, I will take part (online) in the “lundis de l’IA et de la finance” seminar, on the theme “measuring and correcting biases in AI systems”, co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris. My opening talk will revisit fairness in the insurance context [the slides are available].