Category Archives: Research

III Congreso Universitario Internacional sobre Seguros y Reaseguros en Perú

In a few hours, I will give a talk at the III Congreso Universitario Internacional sobre Seguros y Reaseguros en Perú, on Detecting Hidden Bias in Insurance AI Through Counterfactuals.

As AI becomes embedded in underwriting, pricing, fraud detection, and claims automation, one challenge remains widely underestimated: models can discriminate without ever using a prohibited variable. Indirect discrimination (i.e., bias transmitted through correlated or downstream variables) poses a subtle but critical risk for insurers, all the more since actuarial science relies heavily on models based on proxy variables. This presentation will explore how causal reasoning and counterfactual thinking can illuminate what standard machine learning methods often obscure. I will begin with intuitive economic decompositions of group disparities, then show how recent advances in optimal-transport-based counterfactuals enable us to ask: “What would this prediction have been if the sensitive attribute had been different?” Drawing on a recent framework for causal mediation via sequential optimal transport, we will see how actuaries can break down total model disparities into direct and indirect components (even with complex mediators such as behavioral features, prior claims history, or categorical underwriting variables). The goal of the session is to leave the audience with a clear understanding of where hidden bias may emerge in insurance AI systems, how to diagnose it using modern causal tools, and how these insights support better governance, transparency, and compliance. A forward-looking, accessible presentation to close the day and open new perspectives on fair and responsible AI in insurance.
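To make the counterfactual question concrete, here is a minimal sketch of a one-dimensional optimal-transport counterfactual, built by rank-preserving quantile mapping between the two groups’ distributions. Everything in it (the variable, the toy model, the numbers) is a hypothetical illustration, not code from the work mentioned above, which handles multivariate and categorical mediators via sequential transport.

```python
# Illustrative sketch (not the paper's implementation): a univariate
# optimal-transport counterfactual via rank-preserving quantile mapping.
# For one-dimensional distributions, the optimal transport map from
# group 0 to group 1 is F_1^{-1}(F_0(x)): keep the individual's rank,
# move it to the other group's distribution.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a feature (e.g., a proxy such as credit score)
# whose distribution differs across the sensitive attribute s.
x_s0 = rng.normal(600, 50, 5000)   # individuals with s = 0
x_s1 = rng.normal(650, 40, 5000)   # individuals with s = 1

def ot_counterfactual(x, source, target):
    """Map x from the source sample's distribution to the target's,
    preserving its rank (the 1-d optimal transport map)."""
    rank = np.mean(source <= x)          # empirical F_source(x)
    return np.quantile(target, rank)     # F_target^{-1}(rank)

x = 580.0                                # an individual with s = 0
x_cf = ot_counterfactual(x, x_s0, x_s1)  # "had s been 1"

model = lambda v: 0.002 * (700 - v)      # toy pricing model, s never used
print(model(x), model(x_cf))             # factual vs counterfactual prediction
```

Even though the toy model never sees s, the factual and counterfactual predictions differ, which is precisely the indirect channel the talk is about.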

or, in Spanish (but I will not give the talk in Spanish): Detección de Sesgos Ocultos en la IA de Seguros con Métodos Contrafactuales

Talk at the JFLI, at the NII (国立情報学研究所) in Tokyo (東京)

On my way to Tokyo (東京) for a couple of days, where I was invited by the JFLI (Japanese-French Laboratory for Informatics) to give a talk at the National Institute of Informatics (国立情報学研究所), Room 1810, NII. The talk will be on “Counterfactual and Transport-Based Methods for Understanding Indirect Discrimination in Algorithmic Systems”.

Understanding disparities between demographic groups in algorithmic predictions remains a central challenge in responsible AI. Classical decomposition methods such as the Kitagawa–Oaxaca–Blinder framework, recently extended to nonlinear and machine-learning settings by Tierney et al. (AAAI 2026), show that observed gaps may arise either from legitimate differences in feature distributions or from structural bias. However, such aggregate decompositions provide limited insight into individual-level counterfactual behaviour. In this talk, I will present recent methodological advances that combine causal reasoning with optimal transport to characterize direct and indirect discriminatory pathways in modern predictive systems. Building on transport-based counterfactuals (Fernandes Machado et al., AAAI 2025; IJCAI 2025), we obtain individual-level counterfactual mediators that respect a given causal graph, including both continuous and categorical variables. This enables a fine-grained decomposition of model disparities into components attributable to causal pathways, beyond what is possible with standard fairness metrics or feature-removal strategies. The presentation will emphasize: the connection between decomposition-based fairness analyses and causal mediation; the construction of transport-based counterfactuals aligned with probabilistic graphical models; and applications showing how indirect discrimination can propagate through proxy variables even when sensitive features are not used. The goal is to give a concise and technically grounded overview of how optimal transport and counterfactual inference can provide interpretable tools for understanding fairness issues in machine-learning models. This talk is intended for researchers interested in causal ML, fairness analysis, and transport-based generative methods.
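For readers who want the starting point in symbols: in its classical linear form, the Kitagawa–Oaxaca–Blinder decomposition mentioned above splits the mean outcome gap between groups $a$ and $b$ into a part explained by feature distributions and a structural remainder (using one standard convention for the reference group among several):

```latex
\bar{y}_a - \bar{y}_b
  = \underbrace{(\bar{\mathbf{x}}_a - \bar{\mathbf{x}}_b)^\top \widehat{\boldsymbol{\beta}}_b}_{\text{explained: differences in feature distributions}}
  + \underbrace{\bar{\mathbf{x}}_a^\top \big(\widehat{\boldsymbol{\beta}}_a - \widehat{\boldsymbol{\beta}}_b\big)}_{\text{unexplained: structural differences}}
```

where $\widehat{\boldsymbol{\beta}}_g$ is fitted by regression within group $g$. The extensions discussed in the talk replace these within-group linear fits with machine-learning models and individual-level, transport-based counterfactuals.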

Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

Our paper “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”, with Bertille Tierny and François Hu, is now online on arXiv.

Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for the responsible deployment of linear models.
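As a back-of-the-envelope illustration of what “direct versus indirect” means for a linear score (a sketch on simulated data, not the paper’s post-processing estimator), the group gap in predictions decomposes exactly into the coefficient on the sensitive attribute plus the contributions carried by correlated features:

```python
# Illustrative decomposition: for a linear score y_hat = b0 + b_s*s + x @ b,
# the group gap in predictions splits exactly into a direct term (the
# coefficient on s) and indirect terms carried by features whose
# distributions differ across s. All data here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
s = rng.binomial(1, 0.5, n)                    # sensitive attribute
x1 = rng.normal(0, 1, n) + 0.8 * s             # feature correlated with s
x2 = rng.normal(0, 1, n)                       # independent feature
y = 1.0 + 0.5 * s + 1.2 * x1 + 0.7 * x2 + rng.normal(0, 1, n)

X = np.column_stack([s, x1, x2])
fit = LinearRegression().fit(X, y)
b_s, b_x1, b_x2 = fit.coef_

gap = fit.predict(X)[s == 1].mean() - fit.predict(X)[s == 0].mean()
direct = b_s                                   # direct bias
indirect = (b_x1 * (x1[s == 1].mean() - x1[s == 0].mean())
            + b_x2 * (x2[s == 1].mean() - x2[s == 0].mean()))
print(gap, direct + indirect)                  # the two quantities match
```

The paper’s contribution is to track how a demographic parity constraint reshapes each of these coefficient-level contributions; the identity above is only the unconstrained starting point.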

We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.

Talk at the PARI Chair, the Three Pillars of Fairness in Pricing

On Wednesday morning (in France), afternoon (in Kyoto), night (in Montréal), I will give a talk at the monthly seminar of the PARI chair, entitled Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance. I will present our recent work, published by the PARI chair itself (working paper 37), and also associated with a report published by the Casualty Actuarial Society in the United States (presented last week by Olivier), “A Scalable toolbox for exposing indirect discrimination in insurance rates”. The slides are online.

Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression

Our paper, Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

Imbalanced distribution learning is a common and significant challenge in predictive modeling, often reducing the performance of standard algorithms. Although various approaches address this issue, most are tailored to classification problems, with limited focus on regression. This paper introduces a novel method to improve learning on tabular data within the Imbalanced Regression (IR) framework, a critical but underexplored problem. We propose using Variational Autoencoders (VAEs) to model and define a latent representation of data distributions. However, like other standard approaches, VAEs can be inefficient with imbalanced data. To address this, we develop an innovative data generation method that combines a disentangled VAE with a Smoothed Bootstrap applied in the latent space. We evaluate the efficiency of this method through numerical comparisons with competitors on benchmark datasets for IR.
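The core mechanism, resampling in a latent space and perturbing with kernel noise before decoding, can be sketched in a few lines. In this stand-in, PCA plays the role of the paper’s disentangled VAE encoder/decoder, and all data and bandwidth choices are hypothetical:

```python
# Minimal sketch of the smoothed bootstrap in latent space. PCA stands in
# for the paper's disentangled VAE: resample latent codes (bootstrap),
# perturb them with kernel noise (smoothing), then decode, so that new
# synthetic observations land near the rare region of the target.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical imbalanced regression data: few observations with large y.
y = rng.exponential(1.0, 2000)
X = np.column_stack([y + rng.normal(0, .3, 2000),
                     y**2 + rng.normal(0, .3, 2000)])

latent = PCA(n_components=2).fit(X)
Z = latent.transform(X)                        # "encode"

# Oversample the rare region (large y) in latent space.
rare = y > np.quantile(y, 0.95)
idx = rng.choice(np.where(rare)[0], size=500, replace=True)   # bootstrap
bandwidth = 0.5 * Z[rare].std(axis=0)
Z_new = Z[idx] + rng.normal(0, 1, (500, 2)) * bandwidth       # smoothing
X_new = latent.inverse_transform(Z_new)                        # "decode"
print(X_new.shape)
```

The smoothing step is what distinguishes this from a plain bootstrap: instead of repeating existing points, it fills in the sparse tail of the distribution.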

KurtHGR: A Neural Maximal Correlation for Tabular Datasets

Our paper, KurtHGR: A Neural Maximal Correlation for Tabular Datasets, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

The study of dependencies between variables is a fundamental pillar of machine learning, influencing areas as diverse as feature selection, fairness, dimensionality reduction, and multimodal learning. Among nonlinear correlation measures, the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation stands out for its universality and remarkable theoretical properties. Defined as the maximum achievable correlation between nonlinear transformations of two random variables, it provides an intrinsic quantification of statistical dependence, regardless of their marginal distributions. However, despite its theoretical potential, its practical adoption still faces several challenges. In this paper, we present a new approach called KurtHGR, dedicated to the estimation of the bivariate nonlinear correlation matrix of a set of variables. We show that this solution is effective in detecting nonlinear correlations, robust to noise, and computationally efficient, thanks to a neural architecture specifically designed for this purpose. We evaluate its performance through numerical illustrations and feature selection experiments, where we demonstrate that KurtHGR empirically outperforms state-of-the-art approaches.
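As context, the generic neural estimator of HGR maximal correlation trains two networks to maximize the correlation between learned transformations of the two variables. The sketch below illustrates that baseline idea on simulated data; it is not the KurtHGR architecture itself:

```python
# Minimal neural estimator of the HGR maximal correlation (illustrative
# baseline, not KurtHGR): two small networks f and g are trained to
# maximize the correlation between f(X) and g(Y), with outputs
# standardized within each batch.
import torch

torch.manual_seed(0)
n = 4096
x = torch.randn(n, 1)
y = torch.cos(3 * x) + 0.1 * torch.randn(n, 1)   # nonlinear dependence,
                                                  # near-zero Pearson corr.

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

f, g = mlp(), mlp()
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-2)

for _ in range(500):
    fx, gy = f(x), g(y)
    fx = (fx - fx.mean()) / (fx.std() + 1e-8)     # standardize f(X)
    gy = (gy - gy.mean()) / (gy.std() + 1e-8)     # standardize g(Y)
    loss = -(fx * gy).mean()                      # negative correlation
    opt.zero_grad(); loss.backward(); opt.step()

print(f"estimated HGR: {-loss.item():.3f}")       # close to 1 here
```

On this example, linear correlation is essentially zero while the estimated maximal correlation approaches one, which is exactly the kind of dependence HGR is designed to detect.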

GCKE 2025, Osaka, Annual Global Congress of Knowledge Economy

This week, I will attend the 10th Annual Global Congress of Knowledge Economy, in Osaka (大阪). On Wednesday morning, I will chair the GCKE04 session, Economic Governance & Sustainable Development. I will also give a talk on “Fairness and Insurance: Disentangling Illegitimate and Indirect Discriminations” (slides are available). It is based on recent work with Olivier and Marie-Pier Côté.

2025 CAS (Casualty Actuarial Society) Canada Connection

In less than a month, Olivier Côté will attend the CAS Canada Connection, in Toronto. He will speak in a session on Operationalizing Fairness in Actuarial Pricing: From Principle to Practice.

Fairness metrics often lack actuarial relevance and are expressed in abstract units, obscuring real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people. This session will present new research from the CAS Race and Insurance Pricing series, focusing on the unique challenge of establishing fairness in actuarial pricing. We argue that actuarial fairness, solidarity, and causality form the three dimensions of fairness in insurance. These give rise to a five-point spectrum of pricing benchmarks, each reflecting distinct fairness goals and trade-offs. We quantify the monetary impact of unfairness at both the policyholder and segment levels through a large-scale Québec auto insurance case study.

The learning objectives are to (1) describe the three dimensions of fairness in insurance pricing: actuarial fairness, solidarity, and causality; (2) translate these dimensions into a spectrum of five pricing benchmarks; and (3) diagnose and quantify potential unfairness at both individual and segment levels using actuarially meaningful metrics.

It will be based on our recent paper, “A Scalable toolbox for exposing indirect discrimination in insurance rates”.

Functional Analysis of Loss-development Patterns in P&C Insurance

Our paper, “Functional Analysis of Loss-development Patterns in P&C Insurance”, written with Qiheng (Steve) Guo and Mike Ludkovski, is now available on arXiv.

We analyze loss development in NAIC Schedule P loss triangles using functional data analysis methods. Adopting the functional viewpoint, our dataset comprises 3300+ curves of incremental loss ratios (ILR) of workers’ compensation lines over 24 accident years. Relying on functional data depth, we first study similarities and differences in development patterns based on company-specific covariates, as well as identify anomalous ILR curves. The exploratory findings motivate the probabilistic forecasting framework developed in the second half of the paper. We propose a functional model to complete partially developed ILR curves based on partial least squares regression of PCA scores. Coupling the above with functional bootstrapping allows us to quantify future ILR uncertainty jointly across all future lags. We demonstrate that our method has much better probabilistic scores relative to Chain Ladder and in particular can provide accurate functional predictive intervals.
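A simplified version of the curve-completion step can be sketched as follows: learn a principal-component basis from fully developed curves, then estimate a partially observed curve’s scores from its observed lags. The paper’s actual pipeline uses partial least squares and functional bootstrapping for uncertainty quantification; this stand-in uses ordinary least squares on simulated curves:

```python
# Illustrative sketch (simplified relative to the paper): complete a
# partially observed curve using a PCA basis learned from fully developed
# curves, estimating the scores by least squares on the observed lags.
import numpy as np

rng = np.random.default_rng(3)
lags, n = 10, 300

# Hypothetical ILR-like curves: decaying mean + two modes of variation.
t = np.arange(lags)
mean = np.exp(-0.4 * t)
curves = mean + (rng.normal(0, .1, (n, 1)) * np.exp(-0.2 * t)
                 + rng.normal(0, .05, (n, 1)) * np.sin(t / 3))

centered = curves - curves.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:2]                                   # first two FPCA components

obs = 4                                          # lags observed so far
partial = curves[0, :obs]                        # a curve to complete
A = basis[:, :obs].T                             # basis restricted to observed lags
scores, *_ = np.linalg.lstsq(A, partial - curves.mean(axis=0)[:obs], rcond=None)
completed = curves.mean(axis=0) + scores @ basis # full-curve reconstruction
print(completed.round(3))
```

Bootstrapping the residuals of such reconstructions, curve by curve, is what yields the joint predictive intervals across all future lags.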

A Scalable toolbox for exposing indirect discrimination in insurance rates

Our paper, “A Scalable toolbox for exposing indirect discrimination in insurance rates”, with Olivier and Marie-Pier Côté, is finally out. It is published as a CAS (Casualty Actuarial Society) working paper.

According to actuarial standards of practice, insurance pricing relies on grouping policyholders by risk to set adequate premiums. Modern predictive models, especially machine learning, excel at detecting statistical associations to differentiate risks, but they can learn spurious or undesired correlations. This raises concerns when socioeconomic or demographic factors may (intentionally or inadvertently) affect the fairness of insurance pricing.
Fairness in insurance is difficult to operationalize due to its ambiguity. Fairness metrics from the machine learning literature lack the segment-specific relevance actuaries require and are expressed in abstract units that obscure real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people.
In this paper, we focus on fairness in actuarial pricing. We study the situation where insurance rates should be fair with respect to a categorical (or discretized) sensitive variable, such as race or economic status, and the latter is fully observed (despite the possible privacy challenges).

  • We argue that actuarial fairness, solidarity, and causality form the three core dimensions of fairness in insurance pricing:
    – Actuarial fairness aligns premiums with expected losses, mitigating cross-subsidies,
    – Solidarity aligns premiums across protected groups, mitigating disparities,
    – Causality ensures models capture only true risk factors, mitigating proxy effects.
  • We translate these dimensions into a five-point spectrum of premiums:
    – The best-estimate premium is the most accurate predictor of losses using all available information, including the sensitive variable,
    – The unaware premium is the most accurate predictor of losses using all information except the sensitive variable,
    – The aware premium is the most accurate predictor of losses when controlling for the sensitive variable,
    – The corrective premium is the most accurate predictor that enforces similar premium distributions across levels of the sensitive variable,
    – The hyperaware premium is the most accurate approximation of the corrective premium that does not directly discriminate on the sensitive variable.
  • We define actuarially relevant local metrics that quantify the potential monetary impact of unfairness at the policyholder level. Proxy vulnerability is the difference between unaware and aware premiums; it locally measures how much the allowed variables pick up the signal of a missing sensitive variable (see the sketch at the end of this post). We define post-pricing local metrics to evaluate the fairness of any pricing structure relative to the estimated spectrum.
  • We partition policyholders to expose the segments in which unfair discrimination is most severe.
  • We integrate these components into a fairness assessment framework that partitions the policyholders, pinpoints segments most affected by unfairness, and evaluates local metrics to diagnose unfairness and guide intervention.
  • We illustrate our approach with a large case study inspired by industry practice. The analysis relies on a real dataset of approximately 768,000 vehicles insured in Québec (2016–2017), covering at-fault material damage claims. We examine the fairness of a pseudo-commercial price with respect to a discretized credit score: low (vulnerable group) vs. high. This sensitive variable measures the policyholder’s economic precariousness.
    – Proxy vulnerability is both material and skewed: while most policyholders may receive a modest rebate, a vulnerable minority of them could face 15–30% overpricing if the regulation only requires that the sensitive variable be omitted,
    – Our integrated framework illustrates that fairness in insurance pricing can be assessed efficiently, with minimal analyst effort. The framework provides simultaneous diagnostics from the three fairness dimensions, translates unfairness into dollar terms at the individual level, and highlights disparities across population segments.
  • We provide additional information and the complete code illustrated on a comprehensive simulated data example in the online supplementary material.

Designed for routine portfolio monitoring, our toolbox delivers valuable insights whether or not the sensitive attribute is included in pricing, provided it is available for assessment. The toolbox’s scalability, across large datasets and rich covariate sets, makes fairness operationalizable for actuaries: intuitive, practical, and encompassing the three fairness dimensions.
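As promised above, here is a hedged sketch of the proxy vulnerability diagnostic: the per-policyholder gap between an unaware premium (sensitive variable omitted) and an aware premium (sensitive variable controlled for). Models, variables, and magnitudes are illustrative stand-ins, not the paper’s case study:

```python
# Illustrative sketch of proxy vulnerability (our simulation, not the
# paper's Québec case study): the per-policyholder gap between the
# unaware premium (sensitive variable omitted) and the aware premium
# (sensitive variable controlled for), in expected-claims units.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
n = 20_000
s = rng.binomial(1, 0.3, n)                   # e.g., low-credit-score group
x = rng.normal(0, 1, n) + 0.6 * s             # rating variable correlated with s
freq = rng.poisson(np.exp(-2 + 0.3 * x + 0.2 * s))   # claim counts

unaware = PoissonRegressor().fit(x.reshape(-1, 1), freq)
aware = PoissonRegressor().fit(np.column_stack([x, s]), freq)

p_unaware = unaware.predict(x.reshape(-1, 1))
p_aware = aware.predict(np.column_stack([x, s]))
proxy_vulnerability = p_unaware - p_aware      # per-policyholder gap

# Where the proxy effect concentrates: compare the gap across groups.
print(proxy_vulnerability[s == 1].mean(), proxy_vulnerability[s == 0].mean())
```

Because x partially proxies s, the unaware model silently redistributes the effect of the sensitive variable across policyholders; the gap exposes who pays for that redistribution, which is the quantity the toolbox reports in dollars and people.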

Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance

With Marie-Pier and Olivier Côté, we have written a short article, Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance, published by the PARI chair.

Insurance rests on the balance between individual risk and collective protection, but contemporary pricing models built on massive data and opaque algorithms raise pressing fairness questions with respect to predefined protected characteristics. While actuarial standards aim for risk-based accuracy, stakeholders increasingly demand accountability and ethical responsibility, social solidarity, and protection against insidious discrimination. We argue that actuarial fairness, solidarity, and causality form three distinct, complementary, and essential pillars of fair insurance pricing. We situate these pillars within broader debates in business ethics and algorithmic fairness, connecting them to traditions of distributive justice (Rawls, 1971; Sen, 1992), information ethics (Floridi, 2016; Nissenbaum, 2009), and the theory of risk sharing (Arrow, 1963). We contend that the three pillars make explicit the ethical trade-offs that actuaries and insurers face when deploying predictive models. No single fairness principle can dominate without degrading the others: actuarial fairness may deepen socioeconomic disparities, solidarity may undermine market efficiency, and causality, while seeking genuine risk effects irrespective of solidarity or actuarial fairness, rests on unverifiable assumptions that can hamper predictive power. By articulating this three-dimensional framework, we move fairness from an implicit assumption to an explicit governance objective, providing a normative lens for corporate governance, regulatory design, and stakeholder accountability in the insurance industry. Beyond actuarial science, these three pillars offer a generalizable framework for assessing fairness in other domains of risk-based algorithmic decision-making, from credit scoring to healthcare pricing.

A more statistical, or actuarial, version will be online soon. And I will give a talk at the PARI chair to present this paper.

Linear Risk Sharing on Networks

Our paper, Linear Risk Sharing on Networks, written with Philipp Ratz, is now available at https://arxiv.org/abs/2509.21411

Over the past decade, alternatives to traditional insurance and banking have grown in popularity. The desire to encourage local participation has led products such as peer-to-peer insurance, reciprocal contracts, and decentralized finance platforms to rely increasingly on network structures to redistribute risk among participants. In this paper, we develop a comprehensive framework for linear risk sharing (LRS), where random losses are reallocated through nonnegative linear operators that can accommodate a wide range of networks. Building on the theory of stochastic and doubly stochastic matrices, we establish conditions under which constraints such as budget balance, fairness, and diversification are guaranteed. The convex order framework allows us to compare different allocations rigorously, highlighting variance reduction and majorization as natural consequences of doubly stochastic mixing. We then extend the analysis to network-based sharing, showing how network topology shapes risk outcomes in complete, star, ring, random, and scale-free graphs. A second layer of randomness, where the sharing matrix itself is random, is introduced via Erdős–Rényi and preferential-attachment networks, connecting risk-sharing properties to degree distributions. Finally, we study convex combinations of identity and network-induced operators, capturing the trade-off between self-retention and diversification. Our results provide design principles for fair and efficient peer-to-peer insurance and network-based risk pooling, combining mathematical soundness with economic interpretability.
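A small numerical companion to the abstract (our illustration, not code from the paper): a doubly stochastic sharing matrix on a ring network, and its convex combination with the identity, which trades self-retention against diversification. Budget balance shows up as preserved means, and diversification as reduced variance:

```python
# Linear risk sharing on a ring network with a doubly stochastic matrix,
# illustrating budget balance (preserved means) and the variance
# reduction from doubly stochastic mixing. All parameters are ours.
import numpy as np

rng = np.random.default_rng(5)
n = 12

# Doubly stochastic sharing matrix for a ring: keep half of one's loss,
# pass a quarter to each neighbour. Rows and columns both sum to one.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

X = rng.exponential(1.0, (50_000, n))         # i.i.d. individual losses

for alpha in (0.0, 0.5, 1.0):                 # retention weight on identity
    M = alpha * np.eye(n) + (1 - alpha) * W   # still doubly stochastic
    shared = X @ M.T                          # reallocated losses
    print(alpha, shared.mean().round(3), shared.var(axis=0).mean().round(3))
# Means stay at 1 for every alpha (budget balance); the average variance
# falls as alpha decreases, i.e., as more risk is mixed across the ring.
```

With independent unit-variance losses, the post-sharing variance of agent i is the sum of the squared entries of row i of M, so any doubly stochastic mixing away from the identity strictly reduces it, which is the majorization effect the paper formalizes through the convex order.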

Certitudes collectives et incertitudes individuelles, les données massives changent-elles la donne ?

A little over a year ago, I took part in the Cerisy conferences, and I was asked to write up my oral presentation. It is now done, under the title Certitudes collectives et incertitudes individuelles, les données massives changent-elles la donne ?

For more than two centuries, the social sciences have faced a stubborn paradox: while individual behaviours are uncertain, contingent, and often unpredictable, their aggregation produces collective regularities of astonishing stability. From Quetelet to Durkheim, by way of Weber, this tension between individual uncertainty and collective certainty has nourished the very constitution of a quantitative social science. The era of massive data gives this paradox burning new relevance. Never have human societies generated so many digital traces: purchases, communications, travel, online interactions, physiological data. These continuous flows, recorded and analyzed at scale, feed the hope of near-total predictability. Some announce the end of uncertainty: algorithms would anticipate our consumption choices, health crises, or political shifts. But this promise raises deep epistemological and social questions: what does it mean to assign a probability to a singular event? How far can an algorithmic score be considered reliable, fair, or legitimate? The aim of this text is to re-examine, afresh, the relationship between individual uncertainty and collective certainty in the age of big data. To do so, we proceed in several steps: first, returning to the historical foundations of the discovery of collective regularities; then showing how bounded rationality, far from contradicting predictability, contributes to the emergence of robust models; then analyzing contemporary debates on the interpretation of probabilities and the need to calibrate scores; and finally exploring the temporal dimension of prediction, from instantaneous nowcasting to long-term climate projections. Our thesis is that massive data do not resolve the paradox, but amplify its scope and its stakes. They shift the debate from a strictly scientific plane to an ethical and political one: how do we govern uncertainty, in a world where it is at once more visible, more measurable, but also more contestable?

The full text is online at https://hal.science/hal-05250596

L’impossible droit à l’erreur, l’impossible droit à l’oubli ?

“Public information is like toothpaste; once it is out of the tube, you cannot put it back in” (Doyle, 2010). In the era of big data, every action leaves a trace that aggregates with others to build risk profiles, credit scores, or medical predictions. Yet Japanese wisdom tells us to “fall down seven times, get up eight” (七転び八起き), a reminder that we learn from our mistakes. But this is possible only if those mistakes do not irreversibly define the individual.

Continue reading L’impossible droit à l’erreur, l’impossible droit à l’oubli ?