
Confidence and Fairness: Scientific Foundations in AI and Risk, Workshop in Paris

Tomorrow, we are holding our workshop Confidence and Fairness: Scientific Foundations in AI and Risk at the SCOR headquarters in Paris. I will give the keynote address for the day, presenting the work we have carried out over the first 18 months (of the 3 years of funding), while laying the foundations for the concepts we will discuss throughout the day.


9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier – “Fairness of predictive models: an application to insurance markets”
10:15 – Coffee break
10:45 – Toon Calders – “Unfair, You Say? Explain Yourself!”
11:30 – Isabel Valera – “Society-centered AI: An Integrative Perspective on Algorithmic Fairness”
12:15 – Lunch break
13:15 – Jean Michel Loubes – “Beyond fairness measures, discovering the bias in the algorithm”
14:00 – Evgeny Chzhen – “An optimization approach to post-processing for classification with system constraints”
14:45 – Michele Loi – “From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making”
15:30 – Coffee break
16:00 – Aurélie Lemmens – “Fair Active Learning for Personalized Policies”
16:45 – François Hu and Antoine Ly – “Fairness and Confidence in Insurance Markets, a Practitioners Perspective”
17:30 – Closing cocktail

Talk with CCR and chaire PARI, in Paris, on government intervention and welfare

This afternoon, I will give a brief talk on welfare and optimal policies for government intervention, at CCR, in Paris; my slides are available. The presentation is based on a paper we wrote a few years ago, Government Intervention in Catastrophe Insurance Markets: A Reinforcement Learning Approach.

This paper designs a sequential repeated game of a micro-founded society with three types of agents: individuals, insurers, and a government. We use Reinforcement Learning (RL), an approach still nascent in the economics literature and closely related to multi-armed bandit problems, to learn the welfare impact of a set of proposed policy interventions per $1 spent on them. The paper rigorously discusses the desirability of the proposed interventions by comparing them against each other on a case-by-case basis, and provides a framework for algorithmic policy evaluation using calibrated theoretical models, which can assist in feasibility studies.
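To make the bandit analogy concrete, here is a minimal epsilon-greedy sketch in Python, where the arms stand for candidate interventions and the reward is a noisy observation of the welfare gain per $1 spent. The intervention names and numbers are hypothetical; the paper's actual environment is a calibrated micro-founded game, not this toy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical interventions with unknown welfare impact per $1 spent
# (names and values are made up for illustration).
true_impact = {"premium_subsidy": 1.3, "lump_sum_transfer": 0.9, "reinsurance_backstop": 1.1}
arms = list(true_impact)

counts = {a: 0 for a in arms}
estimates = {a: 0.0 for a in arms}

for t in range(5000):
    # epsilon-greedy: explore a random intervention with probability 0.1,
    # otherwise pick the intervention with the best current estimate
    if t < len(arms) or rng.random() < 0.1:
        arm = arms[rng.integers(len(arms))]
    else:
        arm = max(arms, key=estimates.get)
    # noisy observation of the welfare gain per $1 for this intervention
    reward = rng.normal(true_impact[arm], 0.5)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # running estimates of welfare impact per $1, per arm
```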

In this paper, we used the concept of Marginal Value of Public Funds (MVPF), introduced in 2020 by Amy Finkelstein, Nathaniel Hendren and Ben Sprung-Keyser in “Welfare Analysis Meets Causal Inference” and “A Unified Welfare Analysis of Government Policies”. See some slides online, or the website https://policyinsights.org/.
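For reference, the MVPF of a policy is the ratio of the beneficiaries’ willingness to pay for it to its net cost to the government, once fiscal externalities (behavioral responses that affect the budget) are taken into account,
\[
\mathrm{MVPF} = \frac{\text{beneficiaries' willingness to pay}}{\text{net government cost, including fiscal externalities}},
\]
so an MVPF above 1 means that each net dollar of public spending generates more than a dollar of value for beneficiaries.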

Talk, Chaire ACTIONS, CNAM, Paris

Today, I will give a joint presentation with Olivier Côté at the CNAM (Conservatoire National des Arts et Métiers) in Paris, for the chaire ACTIONS. My talk will be an introduction based on our first two joint papers, “A fair price to pay: Exploiting causal graphs for fairness in insurance”, published earlier this year, and “Selection Bias in Insurance: Why Portfolio-Specific Fairness Fails to Extend Market-Wide”; my slides are available. Olivier will present more recent work, visualizing the fairness spectrum on real data from an insurance company; his slides are also available.

Continue reading Talk, Chaire ACTIONS, CNAM, Paris

Talks at Milliman R&D, Paris

Tomorrow, I will attend the Milliman R&D Seminar for a joint presentation with Olivier Côté, who will spend the week in Paris. My talk will be an introduction based on our first two joint papers, “A fair price to pay: Exploiting causal graphs for fairness in insurance”, published earlier this year, and “Selection Bias in Insurance: Why Portfolio-Specific Fairness Fails to Extend Market-Wide”; my slides are available. Olivier will present more recent work, visualizing the fairness spectrum on real data from an insurance company; his slides are also available.

Optimal Transport on Categorical Data for Counterfactuals, at IJCAI’25

This summer, Agathe will present our recent paper Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport at the 34th International Joint Conference on Artificial Intelligence, IJCAI.

Recently, optimal transport-based approaches have gained attention for deriving counterfactuals, e.g., to quantify algorithmic discrimination. However, in the general multivariate setting, these methods are often opaque and difficult to interpret. To address this, alternative methodologies have been proposed, using causal graphs combined with iterative quantile regressions (Plečko and Meinshausen (2020)) or sequential transport (Fernandes Machado et al. (2025)) to examine fairness at the individual level, often referred to as “counterfactual fairness.” Despite these advancements, transporting categorical variables remains a significant challenge in practical applications with real datasets. In this paper, we propose a novel approach to address this issue. Our method involves (1) converting categorical variables into compositional data and (2) transporting these compositions within the probabilistic simplex of $\mathbb{R}^d$. We demonstrate the applicability and effectiveness of this approach through an illustration on real-world data, and discuss limitations.
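As a rough illustration of step (2), here is a Python sketch that transports compositions between two groups by mapping them to centered log-ratio (clr) coordinates and applying a Gaussian (Bures–Wasserstein) optimal transport map. The data are made up, and the Gaussian map is only a simple stand-in for the paper's Dirichlet transport; it merely illustrates what it means to move points within the simplex.

```python
import numpy as np

def clr(x, eps=1e-9):
    # centered log-ratio transform: compositions (rows summing to 1) -> R^d
    lx = np.log(x + eps)
    return lx - lx.mean(axis=1, keepdims=True)

def clr_inv(z):
    # inverse clr: renormalize exp(z) back onto the simplex
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sqrtm(c):
    # square root of a symmetric positive semi-definite matrix
    w, v = np.linalg.eigh(c)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

rng = np.random.default_rng(0)
# made-up compositions for two groups (e.g., category-membership probabilities)
x0 = rng.dirichlet([2.0, 5.0, 3.0], size=500)  # source group
x1 = rng.dirichlet([4.0, 2.0, 4.0], size=500)  # target group

z0, z1 = clr(x0), clr(x1)
m0, m1 = z0.mean(axis=0), z1.mean(axis=0)
c0 = np.cov(z0, rowvar=False) + 1e-6 * np.eye(3)  # ridge: clr space is degenerate
c1 = np.cov(z1, rowvar=False) + 1e-6 * np.eye(3)

# Gaussian OT map T(z) = m1 + A (z - m0), with
# A = c0^{-1/2} (c0^{1/2} c1 c0^{1/2})^{1/2} c0^{-1/2}
s0 = sqrtm(c0)
s0_inv = np.linalg.pinv(s0)
A = s0_inv @ sqrtm(s0 @ c1 @ s0) @ s0_inv

counterfactuals = clr_inv(m1 + (z0 - m0) @ A.T)  # counterfactual compositions
```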

Paradoxes of segmentation and discrimination in insurance

This article was originally written in French, and published here

“The decision cannot be racist since it was made without any information about the person’s ethnic origin.” We’ve all heard this kind of statement at one time or another, whether about racism, ageism, or sexism, and whether it concerns human decisions, models, or algorithms. However, Kranzberg’s Law1 reminds us that technology is neither good nor bad, but it is not neutral either. Neutrality may only come at a certain price. And it may be time to revisit the major principles surrounding segmentation and fairness in insurance, to better understand what we are talking about when we raise the issue of discrimination.

Figure: Krater representing Theseus and Procrustes, and Theseus killing the Crommyon boar (source: The Miriam and Ira D. Wallach Division of Art, 1862–1864)

Continue reading Paradoxes of segmentation and discrimination in insurance


  1. Kranzberg’s Law, stated by the historian of technology Melvin Kranzberg in 1986, was formulated in an article entitled “Technology and History: ‘Kranzberg’s Laws’”, published in the journal Technology and Culture. Six laws are proposed, but the first is the best known, reminding us that the effects of technology depend on the social, political, economic, and cultural context in which it is used: “Technology is neither good nor bad; nor is it neutral.”

Measuring and correcting biases in AI systems

At lunchtime today, I will take part (online) in the “lundis de l’IA et de la finance” seminar, on the theme “measuring and correcting biases in AI systems”, co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris. My opening talk will revisit fairness in the context of insurance [slides are available].

Continue reading Measuring and correcting biases in AI systems

Confidence and Fairness: Scientific Foundations in AI and Risk (mid-May in Paris)

In mid-May, together with the SCOR Foundation for Science, we are organizing a one-day workshop on Confidence and Fairness: Scientific Foundations in AI and Risk. Registration is now open! The agenda will be:

9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier
10:15 – Coffee break
10:45 – Toon Calders
11:30 – Isabel Valera
12:15 – Lunch break
13:15 – Jean Michel Loubes
14:00 – Evgeny Chzhen
14:45 – Michele Loi
15:30 – Coffee break
16:00 – Aurélie Lemmens
16:45 – François Hu and Antoine Ly
17:30 – Closing cocktail

"sendo l'intento mio scrivere cosa utile a chi la intende…"