Category Archives: Publications

Faut-il socialiser les risques ou responsabiliser les territoires ?

A short article, written with Laurence Barry, is now online on the Revue Banque website.

Climate change, with its intensification of extreme events, is gathering pace at a time when the available data on these events are multiplying at an increasingly fine granularity. Moreover, in some countries, notably France, the climate stress tests introduced in recent years have helped insurance companies build up their capabilities with these models.

(to be continued…)

Decomposing Probabilistic Scores

Our paper Decomposing Probabilistic Scores: Reliability, Information Loss and Uncertainty, with Agathe Fernandes-Machado, is now available at https://doi.org/10.48550/arXiv.2603.15232

Calibration is a conditional property that depends on the information retained by a predictor. We develop decomposition identities for arbitrary proper losses that make this dependence explicit. At any information level \mathcal{A}, the expected loss of an \mathcal{A}-measurable predictor splits into a proper-regret (reliability) term and a conditional entropy (residual uncertainty) term. For nested levels \mathcal{A}\subset\mathcal{B}, a chain decomposition quantifies the information gain from \mathcal{A} to \mathcal{B}. Applied to classification with features \boldsymbol{X} and score S=s(\boldsymbol{X}), this yields a three-term identity: miscalibration, a “grouping” term measuring information loss from \boldsymbol{X} to S, and irreducible uncertainty at the feature level. We leverage the framework to analyze post-hoc recalibration, aggregation of calibrated models, and stagewise/boosting constructions, with explicit forms for Brier and log-loss.
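
For intuition, here is the shape this three-term identity takes for the Brier score (our rendering of the classical calibration–refinement form, consistent with the abstract; the general proper-loss statement is in the paper). With S=s(\boldsymbol{X}),

\[
\mathbb{E}\big[(Y-S)^2\big]
=\underbrace{\mathbb{E}\big[(S-\mathbb{E}[Y\mid S])^2\big]}_{\text{miscalibration}}
+\underbrace{\mathbb{E}\big[(\mathbb{E}[Y\mid S]-\mathbb{E}[Y\mid \boldsymbol{X}])^2\big]}_{\text{grouping (information loss)}}
+\underbrace{\mathbb{E}\big[\operatorname{Var}(Y\mid \boldsymbol{X})\big]}_{\text{uncertainty at the feature level}}.
\]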

Sequential Transport for Causal Mediation Analysis

Our paper, Sequential Transport for Causal Mediation Analysis, with Agathe Fernandes-Machado, Iryna Voitsitska and Ewen Gallic, is now available at https://arxiv.org/abs/2603.15182

We propose sequential transport (ST), a distributional framework for mediation analysis that combines optimal transport (OT) with a mediator directed acyclic graph (DAG). Instead of relying on cross-world counterfactual assumptions, ST constructs unit-level mediator counterfactuals by minimally transporting each mediator, either marginally or conditionally, toward its distribution under an alternative treatment while preserving the causal dependencies encoded by the DAG. For numerical mediators, ST uses monotone (conditional) OT maps based on conditional CDF/quantile estimators; for categorical mediators, it extends naturally via simplex-based transport. We establish consistency of the estimated transport maps and of the induced unit-level decompositions into mutatis mutandis direct and indirect effects under standard regularity and support conditions. When the treatment is randomized or ignorable (possibly conditional on covariates), these decompositions admit a causal interpretation; otherwise, they provide a principled distributional attribution of differences between groups aligned with the mediator structure. Gaussian examples show that ST recovers classical mediation formulas, while additional simulations confirm good performance in nonlinear and mixed-type settings. An application to the COMPAS dataset illustrates how ST yields deterministic, DAG-consistent counterfactual mediators and a fine-grained mediator-level attribution of disparities.
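
To make the construction concrete, here is a minimal sketch (not the paper's code) of the marginal, unconditional version for a single numerical mediator; the paper's maps additionally condition on the mediator's parents in the DAG. The monotone map is the textbook one-dimensional OT map T(m) = Q_target(F_source(m)), estimated from empirical CDFs and quantiles:

```python
import numpy as np

def monotone_transport(m_source, m_target):
    """One-dimensional monotone OT map: T(m) = Q_target(F_source(m)),
    with the CDF F and quantile function Q estimated empirically."""
    sorted_source = np.sort(m_source)
    def T(m):
        u = np.searchsorted(sorted_source, m, side="right") / len(sorted_source)
        u = np.clip(u, 1e-6, 1 - 1e-6)       # keep quantile levels in (0, 1)
        return np.quantile(m_target, u)
    return T

# toy Gaussian example: mediator under treatment 0 vs treatment 1
rng = np.random.default_rng(1)
m0 = rng.normal(0.0, 1.0, 2000)              # mediator | T = 0
m1 = rng.normal(1.0, 1.5, 2000)              # mediator | T = 1

T = monotone_transport(m0, m1)
print(T(np.array([-1.0, 0.0, 1.0])))         # ~ 1 + 1.5 m, the Gaussian OT map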

Measuring “global” fairness when data are dispersed

With Agathe Fernandes Machado, Olivier Côté and François Hu, we have put a paper online, Federated Measurement of Demographic Disparities from Quantile Sketches. The starting point is quite simple. Imagine that a scoring model (measuring a risk of recidivism, a probability of hospital readmission, a credit score…) is deployed across several institutions: hospitals, courts, banks, insurers. Each collects its own data, uses a model, and jealously guards its databases. Sometimes because of legal obligations (GDPR, medical confidentiality), sometimes because of technical constraints, sometimes out of organizational reluctance. The problem is that a regulator wants to know whether the score is discriminatory, without ever centralizing the raw data. This is actually a fairly realistic situation, since many algorithmic-fairness objectives are defined at the population level, not locally. Regulators and compliance departments ask: “Does the system, globally, treat protected groups in the same way?” Not: “Does each institution, in isolation, look acceptable?” We show in our paper that local audits can be reassuring yet misleading, because unfairness can arise precisely from what remains invisible when staying silo by silo. And the good news is that global unfairness can be estimated with very limited communication, by asking each silo only for counts and a few quantiles of its scores.

In the paper, we identify two major sources of discrepancy between the local audit and the global audit.

Composition effects: a “fairness version” of Simpson’s paradox. Even if each silo appears to treat groups similarly, the distribution of groups across silos can be very different. A group may be over-represented in certain hospitals, certain courts, certain geographic areas… And if those silos do not have the same score profile (because populations, practices, or contexts differ), then aggregation can create a global gap that appears nowhere locally.
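
In symbols (our notation, for illustration): if silo s holds a share w_{s,g}=n_{s,g}/n_g of group g, whose local score distribution is F_{s,g}, then the global distribution of group g is the mixture

\[
F_g=\sum_s w_{s,g}\,F_{s,g},
\]

so even when F_{s,0}\approx F_{s,1} within every silo (all local audits pass), F_0 and F_1 can differ whenever the weights w_{s,0} and w_{s,1} route the two groups toward silos with different score profiles.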

Inter-silo heterogeneity: “hidden stratification”. Two silos can produce scores that look different (a distribution that is more “optimistic”, more “pessimistic”, more dispersed…), even within the same sensitive group. Locally, each may have acceptable metrics. But once the data are put end to end, these differences become visible and can amplify a disparity between groups. In sensitive domains (healthcare, criminal justice), such heterogeneity is common: coding practices, access to care, triage criteria, local policies…

From a practical standpoint, we propose a single-round audit protocol: each silo sends, for each sensitive group, the number of individuals in that group (a simple count) and k quantiles of the score (for instance k = 25, 50 or 100), on a common grid. And that’s it. No individual scores, no features, no examples. This kind of summary is already produced by many monitoring systems via the quantile sketches used to track distributions. From these quantiles, the server can reconstruct an approximation of the global distributions per group, and then compute the population-level disparity. Theoretically, the discretization error decreases as 1/k: the more quantiles are sent, the finer the reconstructed curve. And we show on real data that a few dozen suffice.
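
Here is a minimal numerical sketch of the protocol (not the paper's code; a Kolmogorov–Smirnov-type gap is used as the disparity measure purely for illustration). Each silo transmits per-group counts and k quantiles; the server rebuilds each group's global CDF as a count-weighted mixture of interpolated silo CDFs:

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.linspace(0.01, 0.99, 50)      # k = 50 quantile levels (common grid)
grid = np.linspace(0, 1, 200)            # evaluation grid for the CDFs

def silo_summary(scores, groups):
    """What a silo transmits: per-group count and k score quantiles."""
    return {g: (np.sum(groups == g),
                np.quantile(scores[groups == g], probs))
            for g in np.unique(groups)}

def global_cdfs(summaries):
    """Server side: count-weighted mixture of silo CDFs, each silo CDF
    interpolated from its k transmitted quantiles."""
    cdfs = {}
    for g in {g for s in summaries for g in s}:
        total = sum(s[g][0] for s in summaries if g in s)
        cdfs[g] = sum(s[g][0] / total *
                      np.interp(grid, s[g][1], probs, left=0.0, right=1.0)
                      for s in summaries if g in s)
    return cdfs

# two synthetic silos: scores are independent of the group WITHIN each silo
# (local audits pass), but group B is over-represented in the high-score silo
n = 5000
silos = []
for loc, p_b in [(0.35, 0.2), (0.55, 0.8)]:
    groups = rng.choice(["A", "B"], size=n, p=[1 - p_b, p_b])
    scores = np.clip(rng.normal(loc, 0.12, size=n), 0, 1)
    silos.append(silo_summary(scores, groups))

F = global_cdfs(silos)
print("global KS disparity:", np.max(np.abs(F["A"] - F["B"])))  # large, yet invisible locally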

As a bonus, we also propose a method to understand, when a gap is measured, why it appears. In particular, we obtain an ANOVA-type decomposition that separates: a share due to mixture/composition effects (the “fairness Simpson”), a share due to genuine inter-silo heterogeneity (structural differences in scores), and an interaction term that can amplify or compensate (but remains controlled).

In short, we show that in a federated environment, population-level fairness is not the average of local fairness. It depends on mixtures, flows, assignment biases, and inter-silo variations. So the right question is not “is each silo fair?” but “is the federated system, as a score-producing mechanism, fair at the population level?” The good news is that this question can be answered without centralizing the data, by sharing only a handful of quantiles and counts, in a single round of communication.

Beyond Procedure: Substantive Fairness in Conformal Prediction

Our paper, Beyond Procedure: Substantive Fairness in Conformal Prediction, with Pengqi Liu, Zijun Yu, Mouloud Belbahri, Masoud Asgharian, and Jesse Cresswell, is now available at https://arxiv.org/abs/2602.16794

Conformal prediction (CP) offers distribution-free uncertainty quantification for machine learning models, yet its interplay with fairness in downstream decision-making remains underexplored. Moving beyond CP as a standalone operation (procedural fairness), we analyze the holistic decision-making pipeline to evaluate substantive fairness, that is, the equity of downstream outcomes. Theoretically, we derive an upper bound that decomposes prediction-set size disparity into interpretable components, clarifying how label-clustered CP helps control method-driven contributions to unfairness. To facilitate scalable empirical analysis, we introduce an LLM-in-the-loop evaluator that approximates human assessment of substantive fairness across diverse modalities. Our experiments reveal that label-clustered CP variants consistently deliver superior substantive fairness. Finally, we empirically show that equalized set sizes, rather than coverage, strongly correlate with improved substantive fairness, enabling practitioners to design fairer CP systems. Our code is available at this https URL.
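
As background for readers less familiar with CP, here is a minimal split-conformal sketch with a class-conditional option (our simplification, in the spirit of the label-clustered variants discussed in the abstract; the paper's constructions and the LLM-in-the-loop evaluator are not reproduced here):

```python
import numpy as np

def conformal_thresholds(probs_cal, y_cal, alpha=0.1, per_class=False):
    """Split conformal prediction for classification, with nonconformity
    score 1 - p_hat(true class). With per_class=True, one threshold per
    label (class-conditional CP); labels are assumed to be 0..K-1."""
    scores = 1.0 - probs_cal[np.arange(len(y_cal)), y_cal]
    def qhat(s):
        n = len(s)
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        return np.quantile(s, level)
    if per_class:
        return np.array([qhat(scores[y_cal == c])
                         for c in range(probs_cal.shape[1])])
    return qhat(scores)

def prediction_sets(probs_test, thresholds):
    """One prediction set per test point: labels whose score is below
    the (shared or per-class) threshold."""
    return [np.where(1.0 - p <= thresholds)[0] for p in probs_test]

# toy usage with random calibration data (3 classes)
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=500)
y = rng.integers(0, 3, size=500)
sets = prediction_sets(probs[:5], conformal_thresholds(probs, y, per_class=True))
print([len(s) for s in sets])   # set sizes, whose group disparity is at stake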

Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

Our paper “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”, with Bertille Tierny and François Hu, is now online on ArXiv.

Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
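
As a back-of-the-envelope version of the direct/indirect split (our simplification; the paper characterizes how demographic parity reshapes each coefficient, not just group means): for a linear score f(\boldsymbol{x},s)=\boldsymbol{\beta}^\top\boldsymbol{x}+\gamma s with binary sensitive attribute s,

\[
\mathbb{E}[f\mid s=1]-\mathbb{E}[f\mid s=0]
=\underbrace{\gamma}_{\text{direct (sensitive attribute)}}
+\underbrace{\boldsymbol{\beta}^\top\big(\mathbb{E}[\boldsymbol{x}\mid s=1]-\mathbb{E}[\boldsymbol{x}\mid s=0]\big)}_{\text{indirect (correlated features)}}.
\]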

We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.

Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression

Our paper, Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

Imbalanced distribution learning is a common and significant challenge in predictive modeling, often reducing the performance of standard algorithms. Although various approaches address this issue, most are tailored to classification problems, with a limited focus on regression. This paper introduces a novel method to improve learning on tabular data within the Imbalanced Regression (IR) framework, which is a critical problem. We propose using Variational Autoencoders (VAEs) to model and define a latent representation of data distributions. However, VAEs can be inefficient with imbalanced data like other standard approaches. To address this, we develop an innovative data generation method that combines a disentangled VAE with a Smoothed Bootstrap applied in the latent space. We evaluate the efficiency of this method through numerical comparisons with competitors on benchmark datasets for IR.
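
The generative step can be sketched in a few lines (a minimal sketch, assuming an already-trained disentangled VAE; the encode/decode functions below are hypothetical placeholders, and the smoothed bootstrap is simply resampling plus Gaussian kernel noise in the latent space):

```python
import numpy as np

def smoothed_bootstrap(z, n_new, h=0.1, rng=None):
    """Smoothed bootstrap in latent space: resample latent codes with
    replacement, then perturb with Gaussian kernel noise of bandwidth h."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(z), size=n_new)
    return z[idx] + h * rng.standard_normal((n_new, z.shape[1]))

# with a trained disentangled VAE (hypothetical `encode` / `decode`):
#   z_train = encode(x_train)
#   x_synth = decode(smoothed_bootstrap(z_train, n_new=1000))
z = np.random.default_rng(0).standard_normal((500, 8))  # stand-in latent codes
print(smoothed_bootstrap(z, 3, h=0.1).shape)            # (3, 8)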

KurtHGR: A Neural Maximal Correlation for Tabular Datasets

Our paper, KurtHGR: A Neural Maximal Correlation for Tabular Datasets, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

The study of dependencies between variables is a fundamental pillar of machine learning, influencing areas as diverse as feature selection, fairness, dimensionality reduction, and multimodal learning. Among nonlinear correlation measures, the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation stands out for its universality and remarkable theoretical properties. Defined as the maximum achievable correlation between nonlinear transformations of two random variables, it provides an intrinsic quantification of statistical dependence, regardless of their marginal distributions. However, despite its theoretical potential, its practical adoption still faces several challenges. In this paper, we present a new approach called KurtHGR, dedicated to the estimation of the bivariate nonlinear correlation matrix of a set of variables. We show that this solution is effective in detecting nonlinear correlations, robust to noise, and computationally efficient, thanks to a neural architecture specifically designed for this purpose. We evaluate its performance through numerical illustrations and feature selection experiments, where we demonstrate that KurtHGR empirically outperforms state-of-the-art approaches.
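
For readers who want the general idea in code, here is a generic neural HGR estimator, maximizing the correlation between two learned transformations (a minimal torch sketch; KurtHGR's specific architecture and its kurtosis-based ingredients are described in the paper, not here):

```python
import torch
import torch.nn as nn

def standardize(t):
    return (t - t.mean()) / (t.std() + 1e-8)

class MLP(nn.Sequential):
    def __init__(self):
        super().__init__(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

def neural_hgr(x, y, epochs=500, lr=1e-2):
    """Estimate HGR maximal correlation by maximizing corr(f(X), g(Y))
    over two small networks f and g."""
    f, g = MLP(), MLP()
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(epochs):
        corr = (standardize(f(x)) * standardize(g(y))).mean()
        opt.zero_grad(); (-corr).backward(); opt.step()
    with torch.no_grad():
        return (standardize(f(x)) * standardize(g(y))).mean().item()

x = torch.randn(2000, 1)
y = x.pow(2) + 0.1 * torch.randn(2000, 1)  # strong nonlinear dependence,
print(neural_hgr(x, y))                    # near-zero Pearson correlation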

Functional Analysis of Loss-development Patterns in P&C Insurance

Our paper, “Functional Analysis of Loss-development Patterns in P&C Insurance”, written with Qiheng (Steve) Guo and Mike Ludkovski, is now available on ArXiv.

We analyze loss development in NAIC Schedule P loss triangles using functional data analysis methods. Adopting the functional viewpoint, our dataset comprises 3300+ curves of incremental loss ratios (ILR) of workers’ compensation lines over 24 accident years. Relying on functional data depth, we first study similarities and differences in development patterns based on company-specific covariates, as well as identify anomalous ILR curves. The exploratory findings motivate the probabilistic forecasting framework developed in the second half of the paper. We propose a functional model to complete partially developed ILR curves based on partial least squares regression of PCA scores. Coupling the above with functional bootstrapping allows us to quantify future ILR uncertainty jointly across all future lags. We demonstrate that our method has much better probabilistic scores relative to Chain Ladder and in particular can provide accurate functional predictive intervals.
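
The completion idea can be sketched as follows (synthetic curves and a simplified pipeline, not the paper's code: PCA scores of fully developed curves are regressed, via PLS, on the observed early lags; functional depth, covariates, and the bootstrap for predictive intervals are omitted):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# toy data: 300 fully developed curves over 10 lags (synthetic ILR-like shapes)
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 10)
curves = np.exp(-np.outer(rng.uniform(1, 4, 300), t)) \
         + 0.02 * rng.standard_normal((300, 10))

pca = PCA(n_components=3).fit(curves)      # functional PCA on full curves
scores = pca.transform(curves)

# regress the PCA scores on the first 4 observed lags (partial development)
pls = PLSRegression(n_components=2).fit(curves[:, :4], scores)

# complete a partially developed curve: predict its scores, then reconstruct
partial = curves[0, :4]
pred_scores = pls.predict(partial.reshape(1, -1))
completed = pca.inverse_transform(pred_scores)   # full 10-lag curve
print(completed.shape)                           # (1, 10)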

A Scalable toolbox for exposing indirect discrimination in insurance rates

Our paper, A Scalable toolbox for exposing indirect discrimination in insurance rates, with Olivier and Marie-Pier Côté, is finally out. It is published as a CAS (Casualty Actuarial Society) Working Paper.

According to actuarial standards of practice, insurance pricing relies on grouping policyholders by risk to set adequate premiums. Modern predictive models, especially machine learning, excel at detecting statistical associations to differentiate risks, but they can learn spurious or undesired correlations. This raises concerns when socioeconomic or demographic factors may (intentionally or inadvertently) affect the fairness of insurance pricing.
Fairness in insurance is difficult to operationalize due to its ambiguity. Fairness metrics from the machine learning literature lack the segment-specific relevance actuaries require and are expressed in abstract units that obscure real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people.
In this paper, we focus on fairness in actuarial pricing. We study the situation where insurance rates should be fair with respect to a categorical (or discretized) sensitive variable, such as race or economic status, which is fully observed (despite possible privacy challenges).

  • We argue that actuarial fairness, solidarity, and causality form the three core dimensions of fairness in insurance pricing:
    – Actuarial fairness aligns premiums with expected losses, mitigating cross-subsidies,
    – Solidarity aligns premiums across protected groups, mitigating disparities,
    – Causality ensures models capture only true risk factors, mitigating proxy effects.
  • We translate these dimensions into a five-point spectrum of premiums:
    – The best-estimate premium is the most accurate predictor of losses using all available information, including the sensitive variable,
    – The unaware premium is the most accurate predictor of losses using all information except the sensitive variable,
    – The aware premium is the most accurate predictor of losses when controlling for the sensitive variable,
    – The corrective premium is the most accurate predictor that enforces similar premium distributions across levels of the sensitive variable,
    – The hyperaware premium is the most accurate approximation of the corrective premium that does not directly discriminate on the sensitive variable.
  • We define actuarially relevant local metrics that quantify the potential monetary impact of unfairness at the policyholder level. Proxy vulnerability is the difference between unaware and aware premiums. It locally measures how much the allowed variables pick up the signal of a missing sensitive variable (see the sketch after this list). We define post-pricing local metrics to evaluate the fairness of any pricing structure relative to the estimated spectrum.
  • We partition policyholders to expose the segments in which unfair discrimination is most severe.
  • We integrate these components into a fairness assessment framework that partitions the policyholders, pinpoints segments most affected by unfairness, and evaluates local metrics to diagnose unfairness and guide intervention.
  • We illustrate our approach with a large case study inspired by industry practice. The analysis relies on a real dataset of approximately 768,000 vehicles insured in Québec (2016–2017), covering at-fault material damage claims. We examine the fairness of a pseudo-commercial price with respect to a discretized credit score: low (vulnerable group) vs. high. This sensitive variable measures the policyholder’s economic precariousness.
    – Proxy vulnerability is both material and skewed: while most policyholders may receive a modest rebate, a vulnerable minority of them could face 15–30% overpricing if the regulation only requires that the sensitive variable be omitted,
    – Our integrated framework illustrates that fairness in insurance pricing can be assessed efficiently, with minimal analyst effort. The framework provides simultaneous diagnostics from the three fairness dimensions, translates unfairness into dollar terms at the individual level, and highlights disparities across population segments.
  • We provide additional information and the complete code illustrated on a comprehensive simulated data example in the online supplementary material.

Designed for routine portfolio monitoring, our toolbox delivers valuable insights whether or not the sensitive attribute is included in pricing, provided it is available for assessment. The toolbox’s scalability, across large datasets and rich covariate sets, makes fairness operationalizable for actuaries: intuitive, practical, and encompassing the three fairness dimensions.
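
As a toy illustration of proxy vulnerability (a minimal sketch, not the toolbox itself: “unaware” and “aware” are read here simply as linear predictors fitted without and with the sensitive variable, and all data are simulated):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# simulated data: x = rating variables, s = discretized sensitive variable
rng = np.random.default_rng(3)
n = 10_000
s = rng.binomial(1, 0.3, n)                           # 1 = vulnerable group
x = rng.standard_normal((n, 3)) + 0.8 * s[:, None]    # features correlated with s
y = 100 + 20 * x[:, 0] + 10 * s + rng.normal(0, 5, n) # losses load on s directly

unaware = LinearRegression().fit(x, y)                # sensitive variable omitted
aware = LinearRegression().fit(np.c_[x, s], y)        # sensitive variable controlled

# proxy vulnerability: unaware minus aware premium, per policyholder
pv = unaware.predict(x) - aware.predict(np.c_[x, s])
print("mean |proxy vulnerability|:", np.abs(pv).mean())
print("worst 1% of policyholders: ", np.quantile(np.abs(pv), 0.99))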

Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance

With Marie-Pier and Olivier Côté, we have written a short article, “Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance” (a three-pillar governance framework for fair insurance pricing), published by the PARI chair.

Insurance rests on the balance between individual risk and collective protection, but contemporary pricing models based on massive data and opaque algorithms raise pressing fairness questions with respect to predefined protected characteristics. While actuarial standards aim for risk-based accuracy, stakeholders increasingly demand accountability and ethical responsibility, social solidarity, and protection against insidious discrimination. We argue that actuarial fairness, solidarity, and causality form three distinct, complementary, and essential pillars for fair insurance pricing. We situate these pillars within broader debates in business ethics and algorithmic fairness, relating them to traditions of distributive justice (Rawls, 1971; Sen, 1992), information ethics (Floridi, 2016; Nissenbaum, 2009), and Arrow’s (1963) theory of risk sharing. We contend that the three pillars make explicit the ethical trade-offs that actuaries and insurers face when deploying predictive models. No single fairness principle can dominate without degrading the others: actuarial fairness can deepen socioeconomic disparities, solidarity can undermine market efficiency, and causality, while seeking genuine risk effects without regard to solidarity or actuarial fairness, rests on unverifiable assumptions that can hamper predictive power. By articulating this three-dimensional framework, we shift fairness from an implicit assumption to an explicit governance objective, thereby providing a normative lens for corporate governance, regulatory design, and stakeholder accountability in the insurance industry. Beyond actuarial science, the three pillars offer a generalizable framework for assessing fairness in other domains of risk-based algorithmic decision-making, from credit scoring to healthcare pricing.

A more statistical, or actuarial, version will be online soon. And I will give a talk at the PARI chair to present this paper.

Linear Risk Sharing on Networks

Our paper, Linear Risk Sharing on Networks, written with Philipp Ratz, is now available at https://arxiv.org/abs/2509.21411

Over the past decade, alternatives to traditional insurance and banking have grown in popularity. The desire to encourage local participation has led products such as peer-to-peer insurance, reciprocal contracts, and decentralized finance platforms to increasingly rely on network structures to redistribute risk among participants. In this paper, we develop a comprehensive framework for linear risk sharing (LRS), where random losses are reallocated through nonnegative linear operators which can accommodate a wide range of networks. Building on the theory of stochastic and doubly stochastic matrices, we establish conditions under which constraints such as budget balance, fairness, and diversification are guaranteed. The convex order framework allows us to compare different allocations rigorously, highlighting variance reduction and majorization as natural consequences of doubly stochastic mixing. We then extend the analysis to network-based sharing, showing how network topology shapes risk outcomes in complete, star, ring, random, and scale-free graphs. A second layer of randomness, where the sharing matrix itself is random, is introduced via Erdős–Rényi and preferential-attachment networks, connecting risk-sharing properties to degree distributions. Finally, we study convex combinations of identity and network-induced operators, capturing the trade-off between self-retention and diversification. Our results provide design principles for fair and efficient peer-to-peer insurance and network-based risk pooling, combining mathematical soundness with economic interpretability.
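
A minimal numerical sketch of the convex combination of identity and uniform pooling mentioned at the end of the abstract (toy setup of our own: i.i.d. unit-mean exponential losses, n = 10 participants):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
losses = rng.exponential(1.0, size=(100_000, n))  # scenarios x participants

# doubly stochastic sharing matrix: convex mix of identity and uniform pooling
alpha = 0.5                                       # self-retention weight
P = alpha * np.eye(n) + (1 - alpha) * np.full((n, n), 1 / n)
assert np.allclose(P.sum(0), 1) and np.allclose(P.sum(1), 1)

shared = losses @ P.T                             # reallocated losses
print("total conserved per scenario:", np.allclose(losses.sum(1), shared.sum(1)))
print("per-agent variance before:", losses.var(axis=0).mean())
print("per-agent variance after: ", shared.var(axis=0).mean())  # reduced by mixing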

L’impossible droit à l’erreur, l’impossible droit à l’oubli ?

“Public information is like toothpaste; once it is out of the tube, it is impossible to put it back in” (Doyle, 2010). In the era of big data, every action leaves a trace that aggregates with the others to build risk profiles, credit scores, or medical predictions. Yet Japanese wisdom tells us that “you fall down seven times, you get up eight” (七転び八起き), a reminder that we learn from our mistakes. But this is only possible if those mistakes do not irreversibly define the individual.


Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression

Our paper, “Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression”, is now available on ArXiv.


When Numbers Mislead Us

To believe there is a single, objective way to describe phenomena through numbers is to forget that data doesn’t “speak” on its own. Collecting data means making choices: what to measure, how, when, on whom, etc. This involves implicit (even ideological) assumptions about what counts as a measurable fact. And in any data analysis, what isn’t measured can be just as important as what is observed. When an influential variable is omitted—ignored, overlooked, or simply unknown—the apparent relationships between other variables can become misleading. This is known as “omitted variable bias”: a hidden effect distorts comparisons and may create a correlation where there is none, or obscure a real one. Sometimes, introducing this “forgotten” variable can completely reverse conclusions drawn from a naive reading of the data. This is Simpson’s paradox.
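
A minimal numerical illustration, using the classic kidney-stone numbers (Charig et al., 1986): treatment A looks better within each stratum, yet worse once the strata are pooled, because A was given predominantly to the harder cases.

```python
# success counts / totals, classic kidney-stone data
#                   treatment A           treatment B
strata = {"small stones": ((81, 87),   (234, 270)),
          "large stones": ((192, 263), (55, 80))}

for name, ((sa, na), (sb, nb)) in strata.items():
    print(f"{name}: A {sa/na:.1%} vs B {sb/nb:.1%}")   # A wins in each stratum

sa = sum(v[0][0] for v in strata.values()); na = sum(v[0][1] for v in strata.values())
sb = sum(v[1][0] for v in strata.values()); nb = sum(v[1][1] for v in strata.values())
print(f"pooled: A {sa/na:.1%} vs B {sb/nb:.1%}")       # B wins overall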

A brief article on Simpson’s paradox, written as a book chapter (for a book to be published in French in the fall or winter), is now available.