Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Extending beyond traditional Shapley values, we use the richer class of Harsanyi allocations, and in particular the proportional Shapley values, which distribute attribution proportionally to feature importance. To keep the approach computationally feasible, we propose a Monte Carlo approximation with robust statistical guarantees, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
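To make the construction concrete, here is a minimal, self-contained sketch (not the paper's implementation): the value function v(S) is taken to be the split-conformal interval width obtained when only the features in coalition S are kept, the remaining ones being frozen at a reference value, and the ordinary Shapley values of that game are estimated by Monte Carlo sampling of permutations. The toy data, the masking-by-reference-value scheme, and all function names are illustrative assumptions; with plain split CP the width depends only on the coalition, so the attribution below is global rather than local.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: the outcome depends strongly on x0, weakly on x1, not at all on x2
X = rng.normal(size=(1200, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1200)

X_tr, y_tr = X[:600], y[:600]            # training set
X_cal, y_cal = X[600:], y[600:]          # calibration set
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

baseline = X_tr.mean(axis=0)             # reference values used to "switch off" features

def interval_width(S, alpha=0.1):
    """Value function v(S): split-conformal interval width when only the
    features in coalition S are active (the others are set to the baseline)."""
    mask = np.zeros(X_cal.shape[1], dtype=bool)
    mask[list(S)] = True
    X_cal_masked = np.where(mask, X_cal, baseline)
    resid = np.abs(y_cal - model.predict(X_cal_masked))       # calibration residuals
    level = np.ceil((1 - alpha) * (len(resid) + 1)) / len(resid)
    return 2 * np.quantile(resid, level)                      # width of [f(x) - q, f(x) + q]

def mc_shapley_width(n_perm=100):
    """Monte Carlo (permutation-sampling) estimate of the Shapley values of v."""
    d = X_cal.shape[1]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        S, v_prev = set(), interval_width(set())
        for j in order:
            S.add(int(j))
            v_new = interval_width(S)
            phi[j] += v_new - v_prev                          # marginal contribution of feature j
            v_prev = v_new
    return phi / n_perm

# Negative values mean that activating the feature shrinks the interval, i.e. the
# feature accounts for predictive uncertainty; feature 0 should dominate here.
print(mc_shapley_width())
```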
At lunchtime today, I am giving a talk online on the theme of fairness and discrimination in insurance, in connection with questions of interpretability and explainability, for the Institut des Actuaires, in France. The slides are online here…
Note: the video of the talk is online on YouTube.
Tomorrow, we are organizing our workshop Confidence and Fairness: Scientific Foundations in AI and Risk at the SCOR headquarters in Paris. I will give the keynote address for the day, presenting the work we have carried out over the past 18 months (out of the 3 years of funding), while laying the foundations for the concepts we will be discussing throughout the day.
9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier – “Fairness of predictive models: an application to insurance markets”
10:15 – Coffee break
10:45 – Toon Calders – “Unfair, You Say? Explain Yourself!”
11:30 – Isabel Valera – “Society-centered AI: An Integrative Perspective on Algorithmic Fairness”
12:15 – Lunch break
13:15 – Jean Michel Loubes – “Beyond fairness measures, discovering the bias in the algorithm”
14:00 – Evgeny Chzhen – “An optimization approach to post-processing for classification with system constraints”
14:45 – Michele Loi – “From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making”
15:30 – Coffee break
16:00 – Aurélie Lemmens – “Fair Active Learning for Personalized Policies”
16:45 – François Hu and Antoine Ly – “Fairness and Confidence in Insurance Markets, a Practitioners Perspective”
17:30 – Closing cocktail
This paper designs a sequential repeated game of a micro-founded society with three types of agents: individuals, insurers, and a government. Using Reinforcement Learning (RL), closely related to multi-armed bandit problems and still nascent in the economics literature, we learn the welfare impact of a set of proposed policy interventions per $1 spent on them. The paper rigorously discusses the desirability of the proposed interventions by comparing them against each other on a case-by-case basis, and it provides a framework for algorithmic policy evaluation using calibrated theoretical models that can assist in feasibility studies.
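Since the abstract does not spell out the learning loop, here is a deliberately simple sketch of the bandit view it alludes to: each candidate policy intervention is an arm, and an epsilon-greedy agent learns its average welfare impact per $1 spent from noisy feedback. The intervention names, the true impact values, and the noise level are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interventions and their (unknown) true welfare impact per $1 spent
true_impact = {"subsidy": 0.8, "mandate": 1.2, "public_option": 1.0}
arms = list(true_impact)

n_rounds, eps = 5000, 0.1
counts = {a: 0 for a in arms}
estimates = {a: 0.0 for a in arms}           # running mean of observed welfare per $1

for t in range(n_rounds):
    if rng.random() < eps:                    # explore a random intervention
        arm = arms[rng.integers(len(arms))]
    else:                                     # exploit the current best estimate
        arm = max(arms, key=estimates.get)
    reward = true_impact[arm] + rng.normal(scale=0.5)   # noisy welfare signal
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)   # estimates approach the true impacts; "mandate" comes out on top
```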
After a few days in Switzerland, I am on my way to Luxembourg. Tonight, I will be talking at the Institut Luxembourgeois des actuaires (ILAC), on “discrimination et interprétabilité des modèles prédictifs”. Slides (in English) are available here.
This afternoon, after a short visit to ETH Zürich yesterday, I will be at the département de sciences actuarielles at the Université de Lausanne. I will be talking about using optimal transport to mitigate unfair predictions and quantify counterfactual fairness. Slides are now online.
Recently, optimal transport-based approaches have gained attention for deriving counterfactuals, e.g., to quantify algorithmic discrimination. However, in the general multivariate setting, these methods are often opaque and difficult to interpret. To address this, alternative methodologies have been proposed, using causal graphs combined with iterative quantile regressions (Plečko and Meinshausen (2020)) or sequential transport (Fernandes Machado et al. (2025)) to examine fairness at the individual level, often referred to as “counterfactual fairness.” Despite these advancements, transporting categorical variables remains a significant challenge in practical applications with real datasets. In this paper, we propose a novel approach to address this issue. Our method involves (1) converting categorical variables into compositional data and (2) transporting these compositions within the probabilistic simplex of \mathbb{R}^d. We demonstrate the applicability and effectiveness of this approach through an illustration on real-world data, and discuss limitations.
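As a rough illustration of steps (1) and (2), and not the paper's algorithm, the sketch below represents a 3-level categorical variable by class-membership probabilities, i.e., points in the probabilistic simplex, and pushes group-0 compositions toward group 1 with a simple moment-matching map in centered log-ratio coordinates; the Dirichlet distributions and the coordinate-wise transport map are illustrative assumptions standing in for the multivariate transport used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

def clr(p):
    """Centered log-ratio: maps the simplex to real coordinates."""
    lp = np.log(p)
    return lp - lp.mean(axis=1, keepdims=True)

def clr_inv(z):
    """Back to the simplex (softmax-like closure)."""
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Illustrative compositions for two groups (e.g., class-membership probabilities
# of a 3-level categorical variable, estimated from the other covariates)
p0 = rng.dirichlet([6, 2, 2], size=500)    # group 0
p1 = rng.dirichlet([2, 5, 3], size=500)    # group 1

# Coordinate-wise Gaussian (moment-matching) transport in clr space, a crude
# stand-in for the multivariate transport map
z0, z1 = clr(p0), clr(p1)
m0, s0 = z0.mean(axis=0), z0.std(axis=0)
m1, s1 = z1.mean(axis=0), z1.std(axis=0)
z0_pushed = (z0 - m0) / s0 * s1 + m1       # map group-0 coordinates onto group-1 moments

p0_counterfactual = clr_inv(z0_pushed)     # counterfactual compositions, back in the simplex
print(p0_counterfactual[:3].round(3))
```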
This article was originally written in French and published here
“The decision cannot be racist since it was made without any information about the person’s ethnic origin.” We’ve all heard this kind of statement at one time or another. Whether it’s about racism, ageism, or sexism. Whether it’s about human decisions, models, or algorithms. However, “Kranzberg’s Law”1 reminds us that technology is neither good nor bad, but it is not neutral either. Neutrality may only come at a certain price. And it may be time to revisit the major principles surrounding segmentation and fairness in insurance, to better understand what we’re talking about when we raise the issue of discrimination.
Kranzberg’s Law, stated by the historian of technology Melvin Kranzberg in 1986, was formulated in an article entitled “Technology and History: ‘Kranzberg’s Laws’”, published in the journal Technology and Culture. Six laws are proposed, but the first is the best known, reminding us that the effects of technology depend on the social, political, economic, and cultural context in which it is used: “Technology is neither good nor bad; nor is it neutral.”
At lunchtime today, I will take part (online) in the “lundis de l’IA et de la finance”, on the theme “measuring and correcting biases in AI systems”, as part of a seminar co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris. My talk, opening the session, will revisit fairness in the insurance context [the slides are available].