
Collectively rethinking disasters

This post was written with Laurence Barry, initially in French.

At a time when the frequency and intensity of natural disasters are increasing as a result of global warming, the issue of disaster insurance is becoming crucial to preserving economic and social resilience in most countries. However, when we look at the various existing systems, we see that the state is usually involved in one way or another in the system that has been put in place. Far from being limited to a commercial transaction between insurer and insured, insurance against natural risks takes the form of a common good—an “insurance commons”—based on mutualization, solidarity, and collective governance. To shed light on this issue and propose another framework for thinking about natural disaster insurance, we will draw on seminal work on hybrid property regimes and commons (Samuelson, 1954; Arrow, 1963; Ostrom, 1990; Ostrom & Ostrom, 1999), as well as economic analyses of insurance (Coase, 1960; Buchanan, 1965) and systemic risk management (Markowitz, 1952; Arrow & Lind, 1970). This theoretical positioning makes it possible to move beyond the traditional dichotomy between the public and private sectors, which has been further exacerbated by recent debates, to show how catastrophe insurance is in fact organized as a polycentric institution, where a multiplicity of actors—states, insurers, reinsurers, local authorities, and policyholders—have a role to play. A virtuous system is one in which all actors cooperate and regulate each other to ensure universal and effective protection despite climatic hazards.
Continue reading Collectively rethinking disasters

SCOR Foundation for Science Webinar, ML and Econometrics

This week, I will give a talk at the SCOR Foundation for Science webinar (slides are available online). I was asked to talk about econometrics versus AI (or machine learning).

Of course, the two concepts are related, and there is a continuum between them.

As we wrote in Charpentier et al. (2017):

Econometrics and machine learning seem to have one common goal: to construct a predictive model, for a variable of interest, using explanatory variables (or features).

For the purposes of this presentation, we will begin by contrasting the two, emphasizing the differences, and then showing the connections that exist.

Long story short, in between we have computational statistics, or statistical learning, corresponding to computational techniques with mathematical (probabilistic) guarantees.
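To make the contrast concrete, here is a minimal sketch (on simulated data, using scikit-learn; none of it is taken from the talk): the econometric side fits a parametric logistic regression whose coefficients can be interpreted and tested, while the machine-learning side fits a gradient boosting model tuned purely for prediction, and both are compared on out-of-sample log-loss.

```python
# Minimal sketch: the same prediction problem seen from the econometric side
# (a parametric GLM with interpretable coefficients) and from the
# machine-learning side (an algorithmic learner aimed at prediction).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = LogisticRegression().fit(X_tr, y_tr)          # econometric flavour
gbm = GradientBoostingClassifier().fit(X_tr, y_tr)  # machine-learning flavour

print("GLM coefficients:", glm.coef_.round(2))      # inference on parameters
print("GLM log-loss:", log_loss(y_te, glm.predict_proba(X_te)))
print("GBM log-loss:", log_loss(y_te, gbm.predict_proba(X_te)))
```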

Continue reading SCOR Foundation for Science Webinar, ML and Econometrics

Causality, for the Institut des Actuaires, Paris

This Tuesday, I am taking part remotely in the annual conference of the Institut des Actuaires, in Paris, for a session on causal models in insurance, giving a general introduction before Aurélien Couloumy takes over to present applications.

For those who want an exercise for the summer, I can mention a table taken from "Optimum Strategies for Creativity and Longevity".

If anyone manages to establish a causal link, I would be interested.

Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Our paper Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models, with Marouane Il Idrissi and Agathe Fernandes Machado, is now online. It will be presented at the IJCAI 2025 Workshop on Explainable Artificial Intelligence (XAI), in Montréal this Summer…

Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically-grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.
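As a toy illustration (not code from the paper), the snippet below computes exact Shapley values for a small three-player cooperative game with a made-up value function; the Shapley allocation is only one of the many efficient allocations, within the Weber and Harsanyi sets, that the paper discusses.

```python
# Toy example: exact Shapley values for a small cooperative game,
# i.e. one particular efficient allocation among many possible ones.
from itertools import combinations
from math import factorial

players = [0, 1, 2]
# Hypothetical value function v(S): worth of each coalition S
v = {(): 0, (0,): 1, (1,): 2, (2,): 2,
     (0, 1): 4, (0, 2): 4, (1, 2): 5, (0, 1, 2): 8}

def shapley(i, players, v):
    n = len(players)
    others = [p for p in players if p != i]
    phi = 0.0
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += w * (v[tuple(sorted(S + (i,)))] - v[S])
    return phi

print([round(shapley(i, players, v), 3) for i in players])
# efficiency: the three values sum to v(N) = 8
```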

Conference in Montpellier (France), on calibration

This morning, I will speak at the "quatrième Journée d'Econometrie appliquée, en l'honneur de Michel Terraza", presenting recent work with Agathe Fernandes Machado, Ewen Gallic, François Hu, and Emmanuel Flachaire. Slides are available. The talk is on "Calibration, ou interprétation probabiliste des scores de modèles boites noires" (calibration, or the probabilistic interpretation of black-box model scores, but the slides are in English).
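As a rough illustration of the idea (not taken from the slides), the sketch below checks the calibration of a black-box classifier on simulated data: among observations receiving a score close to p, the observed frequency of the event should also be close to p.

```python
# Minimal sketch of calibration: reliability-diagram logic on a black-box score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
frac_pos, mean_pred = calibration_curve(y_te, scores, n_bins=10)
for p_hat, p_obs in zip(mean_pred, frac_pos):
    print(f"predicted ~ {p_hat:.2f}  observed frequency ~ {p_obs:.2f}")
```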

Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals

With Marouane Il Idrissi, Agathe Fernandes Machado and Ewen Gallic, we recently uploaded a paper “Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals” on ArXiv.

Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Extending beyond the traditional Shapley values, we use the richer class of Harsanyi allocations, and in particular the proportional Shapley values, which distribute attribution proportionally to feature importance. We propose a Monte Carlo approximation method with robust statistical guarantees to address computational feasibility, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
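For readers unfamiliar with conformal prediction, here is a minimal split-conformal sketch on simulated data (not the paper's code): the width of the resulting interval is the kind of uncertainty measure that the paper then attributes to individual features via Harsanyi allocations.

```python
# Minimal split-conformal sketch: calibrate a residual quantile on held-out
# data, then build prediction intervals with finite-sample coverage.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=3000, n_features=5, noise=10, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

alpha = 0.1                                   # target 90% coverage
resid = np.abs(y_cal - model.predict(X_cal))  # calibration scores
q = np.quantile(resid, np.ceil((1 - alpha) * (len(resid) + 1)) / len(resid))

pred = model.predict(X_te)
lower, upper = pred - q, pred + q
print("empirical coverage:", np.mean((y_te >= lower) & (y_te <= upper)))
print("interval width:", 2 * q)
```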

Confidence and Fairness: Scientific Foundations in AI and Risk, Workshop in Paris

Tomorrow, we organize our workshop Confidence and Fairness: Scientific Foundations in AI and Risk, at the SCOR headquarters, in Paris. I’m going to give the keynote address for the day, presenting the work we’ve been able to carry out over the past 18 months (over the 3 years of funding), while laying the foundations for the concepts we’ll be discussing throughout the day.


9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier – "Fairness of predictive models: an application to insurance markets"
10:15 – Coffee break
10:45 – Toon Calders – “Unfair, You Say? Explain Yourself!”
11:30 – Isabel Valera – “Society-centered AI: An Integrative Perspective on Algorithmic Fairness”
12:15 – Lunch break
13:15 – Jean Michel Loubes – “Beyond fairness measures, discovering the bias in the algorithm”
14:00 – Evgeny Chzhen – “An optimization approach to post-processing for classification with system constraints”
14:45 – Michele Loi – “From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making”
15:30 – Coffee break
16:00 – Aurélie Lemmens – “Fair Active Learning for Personalized Policies”
16:45 – François Hu and Antoine Ly – “Fairness and Confidence in Insurance Markets, a Practitioners Perspective”
17:30 – Closing cocktail

Talk with CCR and chaire PARI, in Paris, on government intervention and welfare

This afternoon, I will give a brief talk on welfare and optimal policies for government intervention, at CCR, in Paris. I have some slides to present. The presentation is based on a paper we wrote a few years ago, "Government Intervention in Catastrophe Insurance Markets: A Reinforcement Learning Approach".

This paper designs a sequential repeated game of a micro-founded society with three types of agents: individuals, insurers, and a government. Nascent to the economics literature, we use Reinforcement Learning (RL), closely related to multi-armed bandit problems, to learn the welfare impact of a set of proposed policy interventions per $1 spent on them. The paper rigorously discusses the desirability of the proposed interventions by comparing them against each other on a case-by-case basis. The paper provides a framework for algorithmic policy evaluation using calibrated theoretical models which can assist in feasibility studies.
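As a very rough illustration of the bandit flavour of the approach (not the paper's calibrated model), the toy sketch below uses an epsilon-greedy agent to compare hypothetical policy interventions, each pull returning a noisy estimate of welfare gained per $1 spent.

```python
# Toy epsilon-greedy bandit: each arm is a hypothetical policy intervention,
# each pull returns a noisy welfare-per-dollar estimate, and the agent
# learns which intervention pays off most.
import numpy as np

rng = np.random.default_rng(0)
true_welfare_per_dollar = [0.8, 1.1, 1.5]   # made-up interventions
n_arms, eps, n_rounds = 3, 0.1, 5000
counts, means = np.zeros(n_arms), np.zeros(n_arms)

for _ in range(n_rounds):
    arm = rng.integers(n_arms) if rng.random() < eps else int(np.argmax(means))
    reward = rng.normal(true_welfare_per_dollar[arm], 0.5)
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # incremental average

print("estimated welfare per $1:", means.round(2))
print("preferred intervention:", int(np.argmax(means)))
```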

In this paper, we used the concept of the Marginal Value of Public Funds, or "MVPF", introduced in 2020 by Amy Finkelstein, Nathaniel Hendren, and Ben Sprung-Keyser, in "Welfare Analysis Meets Causal Inference" and "A Unified Welfare Analysis of Government Policies". See some slides online, or the website https://policyinsights.org/.
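For the record, the MVPF of a policy is the ratio of the beneficiaries' willingness to pay to its net cost to the government (upfront cost minus any fiscal externality). A minimal sketch, with made-up numbers:

```python
# MVPF = willingness to pay / net government cost; a policy whose net cost
# is non-positive "pays for itself" and has an infinite MVPF.
def mvpf(willingness_to_pay, gross_cost, fiscal_externality=0.0):
    net_cost = gross_cost - fiscal_externality
    return float("inf") if net_cost <= 0 else willingness_to_pay / net_cost

# A policy valued at $1.3 per $1 spent, recouping $0.2 in extra tax revenue
print(mvpf(willingness_to_pay=1.3, gross_cost=1.0, fiscal_externality=0.2))  # 1.625
```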

Talk, Chaire ACTIONS, CNAM, Paris

Today, I will give a joint presentation with Olivier Côté at the CNAM (Conservatoire National des Arts et Métiers) in Paris, for the chaire ACTIONS. My talk will be an introduction based on our first two joint papers, "A fair price to pay: Exploiting causal graphs for fairness in insurance", published earlier this year, and "Selection Bias in Insurance: Why Portfolio-Specific Fairness Fails to Extend Market-Wide". My slides are now available. Olivier will present more recent work, visualizing the fairness spectrum on real data from an insurance company; his slides are also available.

Continue reading Talk, Chaire ACTIONS, CNAM, Paris

Talks at Milliman R&D, Paris

Tomorrow, I will attend the Milliman R&D Seminar, for a joint presentation with Olivier Côté, who will spend the week in Paris. My talk will be an introduction based on our first two joint papers, "A fair price to pay: Exploiting causal graphs for fairness in insurance", published earlier this year, and "Selection Bias in Insurance: Why Portfolio-Specific Fairness Fails to Extend Market-Wide". My slides are now available. Olivier will present more recent work, visualizing the fairness spectrum on real data from an insurance company; his slides are also available.

"sendo l'intento mio scrivere cosa utile a chi la intende…"