I was invited to give a talk at the “IFoA AI Ethics Governance & Risk Management forum” at the end of the week. I uploaded some slides to start the discussion.

This post was initially written in French, La technologie nous sauvera-t-elle ?
I feel as though I keep hearing, more and more often, that climate change is above all a problem of innovation. Emissions continue to rise, targets keep slipping out of reach, yet we fill our collective imagination with carbon-capture machines, artificial intelligences supposedly able to optimize the transition, and even technologies designed to alter the climate itself. There is nothing absurd about such confidence in itself, and technology will very likely help. But it becomes politically suspect when it serves mainly to postpone difficult questions, beginning with this one: what are we willing to change, here and now, in the way we produce, consume, and govern? The literature on “mitigation deterrence” helps us understand how the promise of a future intervention can legitimize delaying present efforts. Nor is this mechanism unique to climate change. The COVID pandemic, it seems to me, offered a strikingly similar scene, in which fascination with the biomedical response sometimes pushed into the background the social, institutional, and political tools that the strongest research nevertheless regarded as indispensable.
Continue reading Will Technology Save Us?
On Wednesday, I will give the first part of the talk Risque climatique, retrait des assureurs et granularité des tarifs, organized by the Chaire PARI. I will offer a somewhat general perspective on the problem at hand, namely modeling a competitive insurance market and searching for policies that would allow a regulator to make the competitive equilibrium optimal (or at least improve certain criteria) for overall welfare. Raphaël Dalbarade will then present his work on the subject.



A short article, written with Laurence Barry, has been published online on the Revue Banque website,
Climate change, which comes with an intensification of extreme events, is gaining momentum at a time when the available data on these events are multiplying at an ever finer scale. Moreover, in some countries, notably France, the climate stress tests introduced in recent years have helped insurance companies build up their capacity on these models.

Today is a long travel day, going from Beijing to Hong Kong (a bit more than 2,000 km) by train. At least it is a direct train (a night train was mentioned online, but none was offered). For comparison, Beijing to Hong Kong by train is like Montréal to New Orleans, or Oslo to Barcelona…

Friday morning, I will give a talk at HKU (slides are online).
Next week, I will be at Tsinghua University in Beijing. On Tuesday, in the early afternoon, I will give three lectures for undergraduate students on the theme: ‘Three lectures on AI and its implications for actuarial (and/or financial) professions.’
These lectures explore the relationship between artificial intelligence and insurance. They begin from the observation that insurance has long relied on prediction, classification, and decision-making under uncertainty, well before the recent rise of AI. AI therefore does not introduce these issues from scratch, but changes their scale, granularity, and practical consequences. The lectures review the insurance foundations of pricing and pooling, then examine the main challenges raised by AI, including personalization, selection, causality, bias, fairness, governance, and trust. They finally turn to the concrete uses of AI across the insurance value chain, emphasizing that a good system should not be judged by accuracy alone, but also by its calibration, its fairness, and its ability to support real decisions in practice.
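As a minimal illustration of that last point (a sketch on synthetic data, not material from the lectures; the model and the 0.5 decision threshold are illustrative assumptions), the same classifier can be examined along three axes at once: accuracy, calibration by score decile, and a demographic parity gap between groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
s = rng.integers(0, 2, n)                            # sensitive attribute, two groups
x = rng.normal(loc=0.5 * s, size=n).reshape(-1, 1)   # feature correlated with s
p_true = 1 / (1 + np.exp(-(x[:, 0] - 0.2)))          # true claim probability
y = rng.binomial(1, p_true)

clf = LogisticRegression().fit(x, y)
p_hat = clf.predict_proba(x)[:, 1]
print("accuracy:", accuracy_score(y, p_hat > 0.5))

# calibration: predicted vs observed claim frequency, per score decile
edges = np.quantile(p_hat, np.linspace(0, 1, 11))
bucket = np.clip(np.digitize(p_hat, edges) - 1, 0, 9)
for b in range(10):
    m = bucket == b
    print(f"decile {b}: predicted {p_hat[m].mean():.3f}, observed {y[m].mean():.3f}")

# demographic parity gap: difference in acceptance rates across groups
accept = p_hat > 0.5
print("DP gap:", abs(accept[s == 1].mean() - accept[s == 0].mean()))
```

A model can score well on the first line and still fail on the next two, which is exactly why the lectures argue against judging a system on accuracy alone.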
In the evening, I will give a talk at the seminar at Renmin University of China, on the theme: ‘Using optimal transport to mitigate unfair predictions and quantify counterfactual fairness.’ The first part will revisit topics that I presented in greater detail in the lecture notes of my course this autumn at Kyoto University, particularly the price to be paid in terms of accuracy in order to achieve fairness. The second part will discuss the paper ‘Sequential Transport for Causal Mediation Analysis,’ which was posted online a few days ago.
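In one dimension, the transport-based construction behind demographic parity has a particularly simple form: each group’s scores are mapped to the (weighted) Wasserstein barycenter of the groupwise score distributions, obtained by averaging quantile functions. Here is a minimal sketch of that mechanism; the Beta-distributed synthetic scores and equal weights are illustrative assumptions, not the setup of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
# groupwise score distributions from some base model (illustrative)
scores = {0: rng.beta(2, 5, 5000), 1: rng.beta(4, 3, 5000)}
weights = {0: 0.5, 1: 0.5}

u = np.linspace(0.005, 0.995, 199)                     # common quantile grid
q = {g: np.quantile(v, u) for g, v in scores.items()}  # groupwise quantile functions
bary = sum(weights[g] * q[g] for g in scores)          # barycenter quantile function

def fair_score(x, g):
    """Transport a score from group g to the barycenter distribution."""
    rank = (scores[g] <= x).mean()      # empirical CDF of group g at x
    return np.interp(rank, u, bary)     # barycenter quantile at that rank

# after transport, both groups share (approximately) the same distribution
for g, v in scores.items():
    mapped = np.array([fair_score(x, g) for x in v])
    print(g, round(v.mean(), 3), "->", round(mapped.mean(), 3))
```

The gap between a score and its transported version is one way to see, observation by observation, the accuracy cost of enforcing fairness.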
On Wednesday, I will have an in-depth academic exchange session with students from the Tsinghua Actuarial Science Association, at Tsinghua University.
Our paper, Fair regression under localized demographic parity constraints, with Christophe Denis, Romuald Elie, Mohamed Hebiri and François Hu, is now available online, on arXiv,
Demographic parity (DP) is a widely used group fairness criterion requiring predictive distributions to be invariant across sensitive groups. While natural in classification, full distributional DP is often overly restrictive in regression and can lead to substantial accuracy loss. We propose a relaxation of DP tailored to regression, enforcing parity only at a finite set of quantile levels and/or score thresholds. Concretely, we introduce a novel $(\boldsymbol{\ell},\mathcal{Z})$-fair predictor, which imposes groupwise CDF constraints of the form $F_{f\mid S=s}(z_m)=\ell_m$ for prescribed pairs $(\ell_m,z_m)$. For this setting, we derive closed-form characterizations of the optimal fair discretized predictor via a Lagrangian dual formulation and quantify the discretization cost, showing that the risk gap to the continuous optimum vanishes as the grid is refined. We further develop a model-agnostic post-processing algorithm based on two samples (labeled for learning a base regressor and unlabeled for calibration), and establish finite-sample guarantees on constraint violation and excess penalized risk. In addition, we introduce two alternative frameworks where we match group and marginal CDF values at selected score thresholds. In both settings, we provide closed-form solutions for the optimal fair discretized predictor. Experiments on synthetic and real datasets illustrate an interpretable fairness-accuracy trade-off, enabling targeted corrections at decision-relevant quantiles or thresholds while preserving predictive performance.
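To see what these groupwise constraints mean in practice, here is a minimal post-processing sketch. It is not the paper’s algorithm (which goes through a Lagrangian dual formulation and comes with finite-sample guarantees); it only illustrates, on synthetic data with illustrative levels $\ell_m$ and thresholds $z_m$, how a monotone per-group remapping of base scores can pin the groupwise CDF to the prescribed values.

```python
import numpy as np

rng = np.random.default_rng(2)
# base regressor scores per sensitive group (illustrative)
f = {0: rng.normal(0.0, 1.0, 5000), 1: rng.normal(0.5, 1.2, 5000)}

ell = np.array([0.25, 0.50, 0.75])   # prescribed CDF levels ell_m
z = np.array([-0.6, 0.0, 0.6])       # prescribed thresholds z_m

def localized_dp(x, g):
    """Monotone remap so that, within group g, P(score <= z_m) ~ ell_m."""
    knots = np.quantile(f[g], ell)   # group quantiles at the levels ell
    # pad both ends so the map stays strictly increasing in the tails
    kx = np.concatenate([[f[g].min() - 1.0], knots, [f[g].max() + 1.0]])
    kz = np.concatenate([[z[0] - 1.0], z, [z[-1] + 1.0]])
    return np.interp(x, kx, kz)

for g, v in f.items():
    out = localized_dp(v, g)
    print(g, [round((out <= zm).mean(), 3) for zm in z])  # ~ [0.25, 0.5, 0.75]
```

Because the remapping is monotone within each group, rankings are preserved; parity is only enforced at the chosen thresholds, which is exactly the localized relaxation described in the abstract.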
Pictures from Kyoto 京都, Fall and Winter…

The latest newsletter (Fall and Winter activities) related to our research project on algorithmic fairness and insurance markets is finally out, Newsletter_2026_5
Get back to us if you want more details, or just to share some feedback… A French version is also available, Infolettre_2026_5
Our paper, ‘A Scalable toolbox for exposing indirect discrimination in insurance rates’, with Olivier Côté and Marie-Pier Côté, is out.
Here is Alyssa Gambone’s post on LinkedIn:
It’s time for one of my rare insurance related posts, though this one isn’t entirely off theme of my normal content. The CAS recently published a paper entitled ‘A Scalable toolbox for exposing indirect discrimination in insurance rates‘ by Olivier Côté, Marie-Pier Côté, and Arthur Charpentier that makes an incredibly important point as the actuarial profession goes deeper and deeper into machine learning and AI. “In an unrealistic extreme, oracle insurers — capable of perfectly predicting both amount and timing of insurance claims — might charge each policyholder precisely their discounted future claim amount, questioning the very concept of insurance risk transfer.” I’m a big believer that in service of the “most accurate” rates (what the paper calls “actuarial fairness”, which is incredibly damning of our profession), we have lost our purpose, which is to ensure a wide ranging ability of society to take normal risks like driving a car, owning a home, or starting a business. Society is better when insurance is available and affordable, not when it is precisely accurate for the smallest groups possible. In service of the capitalistic goal of maximizing profits at all costs, we have found out that the costs might be our industry’s societal purpose and reason to exist. “As data granularity increases, so does the potential for actuarial justification in perpetuating [historic and socioeconomic] disparities.” Shame on our profession if it does.
There will be much more work published on these topics soon… Meanwhile, here is our abstract,
Last fall, I wrote about the poster “Paysage mathématique”, available online at https://kits.math.cnrs.fr/.

Since then, a few “postcards”, taken from this poster, have been released.



Licence for the poster: Romane Charpentier – Mathematical Landscape – CC-BY-SA 4.0
This post was initially written in French, Si personne ne paie pour la preuve, tout le monde paiera pour le sinistre
Let’s start with a truism. In ordinary life, just as in economic life, we have to make decisions without ever knowing everything. Every decision involves some uncertainty, and therefore some risk. Some risks are small, manageable, and we barely notice them anymore. Others can have financial consequences large enough that we would rather transfer them to a third party, by paying a premium so that an insurer will bear them for us. That is, at bottom, one of the most concrete functions of insurance. But an equally interesting question arises when that transfer becomes impossible, or at least impossible at a reasonable price. That is what we call uninsurability. We already encounter it with certain natural risks, when losses become too correlated, too massive, too difficult to mutualize, as I discussed in ‘Insurers and AI, a systemic risk’ and in ‘Insuring AI. New risks? New models?’. And apologies for using this umbrella term, “AI,” which I do not like very much, but I need to simplify a little or I would never finish this post…
The day before yesterday, Thomas Claburn wrote in ‘AI still doesn’t work very well in business, businesses are faking it, and a reckoning is coming’:
Another looming problem is that large insurers have become wary of underwriting policies that cover companies against AI risk.
(Thanks to @flomaraninchi and @ugo for pointing it out to me.) I have the feeling that it is important to understand exactly what this means. If major insurers are becoming reluctant to cover AI-related uses, this is probably not just one more market anecdote, nor simply another legal precaution. It is a signal. An important signal for anyone who builds predictive models, or who is interested in uncertainty and ambiguity, because insurers do not need to be prophets to become cautious. Perhaps that is what risk culture is. It is enough for them to conclude that they do not understand the risk well enough, that they cannot observe it properly, that they cannot reconstruct the chain of responsibility behind it, or that they doubt they can carry it at a sustainable price. In other words, if insurers are stepping back, that should force us to ask whether AI is really under control. What we are dealing with here are very classical questions in the economics of information, imperfect measurement, misaligned incentives, and insufficient proof, much more than a simple dispute about the current level of the technology.
Continue reading If No One Pays for Proof, Everyone Will Pay for the Loss