Category Archives: Research

Measuring “global” fairness when the data are dispersed

With Agathe Fernandes Machado, Olivier Côté and François Hu, we have put a paper online, Federated Measurement of Demographic Disparities from Quantile Sketches. The starting point is fairly simple. Imagine a scoring model (measuring a risk of recidivism, a probability of hospital readmission, a credit score…) deployed across several institutions: hospitals, courts, banks, insurers. Each one collects its own data, uses a model, and guards its databases jealously; sometimes because of legal obligations (GDPR, medical confidentiality), sometimes because of technical constraints, sometimes out of organizational reluctance. The problem is that a regulator wants to know whether the score is discriminatory, without ever centralizing the raw data. This is actually a fairly realistic situation, since many algorithmic-fairness objectives are defined at the population level, not locally. Regulators and compliance departments ask: “Does the system, globally, treat protected groups the same way?” Not: “Does each institution, taken in isolation, look fine?” We show in our paper that local audits can be reassuring and yet misleading, because unfairness can arise precisely from what remains invisible when you stay silo by silo. And the good news is that global unfairness can be estimated with very limited communication, by asking each silo only for counts and a few quantiles of its scores.

In the paper, we identify two major sources of mismatch between the local audit and the global audit.

Composition effects: a “fairness version” of Simpson’s paradox. Even if each silo appears to treat the groups similarly, the way the groups are distributed across silos can be very different. One group may be over-represented in certain hospitals, certain courts, certain geographic areas… And if those silos do not have the same score profile (because populations, practices or contexts differ), then aggregation can create a global gap that appears nowhere locally.
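To make the composition effect concrete, here is a tiny numerical illustration (the numbers are made up for this post, not taken from the paper): each silo gives both groups exactly the same average score, so every local audit reports a zero gap, yet the pooled averages differ because the groups are routed to different silos.

```python
# Hypothetical illustration of the composition (Simpson-type) effect.
# Within each silo, groups A and B receive the SAME mean score: locally, no gap.
silo_mean = {"silo1": 0.30, "silo2": 0.70}   # average score produced by each silo

# But the two groups are spread across silos very differently.
share = {
    "A": {"silo1": 0.90, "silo2": 0.10},     # group A is mostly seen by silo 1
    "B": {"silo1": 0.10, "silo2": 0.90},     # group B is mostly seen by silo 2
}

global_mean = {g: sum(share[g][s] * silo_mean[s] for s in silo_mean) for g in share}
print(global_mean)                            # {'A': 0.34, 'B': 0.66}
print(global_mean["B"] - global_mean["A"])    # global gap of 0.32, invisible silo by silo
```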

Inter-silo heterogeneity: “hidden stratification”. Two silos can produce scores that look different (a more “optimistic” distribution, a more “pessimistic” one, a more dispersed one…), even within the same sensitive group. Locally, each may have acceptable metrics. But once the data are put end to end, these differences become visible and can amplify a disparity between groups. In sensitive domains (healthcare, criminal justice), this heterogeneity is common: coding practices, access to care, triage criteria, local policies…

From a practical point of view, we propose a single round-trip audit protocol: each silo sends, for each sensitive group, the number of individuals in that group (a simple count) and k quantiles of the score (for example k = 25, 50 or 100), on a common grid. And that is all. No individual scores, no features, no examples. This kind of summary is already produced by many monitoring systems through quantile sketches used to track distributions. From these quantiles, the server can reconstruct an approximation of the global distributions per group, and then compute the population-level disparity. Theoretically, the discretization error decreases like 1/k: the more quantiles are sent, the finer the reconstructed curve. And we show on real data that a few dozen are enough.
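Here is a minimal sketch of what such an exchange could look like (a simplified illustration of the idea, not the paper's exact estimator): each silo reports, per group, a count and k quantiles on a common grid; the server pools them into an approximate distribution per group and computes a disparity, here simply a difference of reconstructed means.

```python
import numpy as np

K = 50                                      # quantiles shared per (silo, group); error decays like 1/k
probs = (np.arange(K) + 0.5) / K            # common grid of quantile levels

def silo_summary(scores_by_group):
    """What a silo sends: per sensitive group, a count and K quantiles. No raw data leaves."""
    return {g: (len(s), np.quantile(s, probs)) for g, s in scores_by_group.items()}

def pooled_disparity(summaries, g0, g1):
    """Server side: rebuild each group's global score distribution from the quantiles,
    weighting each silo's quantiles by the number of individuals they represent."""
    def pooled(g):
        values  = np.concatenate([q for n, q in (s[g] for s in summaries)])
        weights = np.concatenate([np.full(K, n / K) for n, q in (s[g] for s in summaries)])
        return np.average(values, weights=weights)
    return pooled(g1) - pooled(g0)

# Toy usage: within each silo both groups have the same score distribution (locally fair),
# but the silos have different score profiles and very different group mixes.
rng = np.random.default_rng(0)
silos = [
    {"A": rng.beta(2, 5, 800), "B": rng.beta(2, 5, 200)},   # "pessimistic" silo, mostly group A
    {"A": rng.beta(5, 2, 150), "B": rng.beta(5, 2, 900)},   # "optimistic" silo, mostly group B
    {"A": rng.beta(3, 3, 400), "B": rng.beta(3, 3, 400)},
]
summaries = [silo_summary(s) for s in silos]
print(pooled_disparity(summaries, "A", "B"))   # population-level gap, from counts and quantiles only
```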

As a bonus, we also propose a method to understand, when a gap is measured, why it appears. In particular, we obtain an ANOVA-type decomposition that separates: a part due to mixing / composition effects (the “Simpson fairness” part), a part due to genuine inter-silo heterogeneity (structural differences in scores), and an interaction term that can amplify or compensate (but remains controlled).
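To fix ideas on what the decomposition separates, here is a simplified, mean-level version (an illustration in the spirit of the paper, not its actual ANOVA decomposition): the global gap splits exactly into a composition term, a within-silo heterogeneity term, and an interaction term.

```python
import numpy as np

# Hypothetical within-silo group means and within-group silo shares (one entry per silo).
m_A = np.array([0.32, 0.71, 0.50])   # mean score of group A in each silo
m_B = np.array([0.30, 0.68, 0.50])   # mean score of group B in each silo
w_A = np.array([0.70, 0.10, 0.20])   # how group A is spread across silos (sums to 1)
w_B = np.array([0.10, 0.70, 0.20])   # how group B is spread across silos (sums to 1)

gap = np.sum(w_A * m_A) - np.sum(w_B * m_B)           # global (population) gap

composition   = np.sum((w_A - w_B) * m_B)             # groups routed to different silos
heterogeneity = np.sum(w_B * (m_A - m_B))             # within-silo score differences
interaction   = np.sum((w_A - w_B) * (m_A - m_B))     # amplification / compensation

print(gap, composition + heterogeneity + interaction)  # identical by construction
print(composition, heterogeneity, interaction)
```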

In short, we show that in a federated environment, population-level fairness is not the average of local fairness. It depends on mixtures, flows, assignment biases and inter-silo variations. So the right question is not “is each silo fair?” but “is the federated system, as a score-producing mechanism, fair at the population level?” The good news is that this question can be answered without centralizing the data, by sharing only a handful of quantiles and counts, in a single round of communication.

Beyond Procedure: Substantive Fairness in Conformal Prediction

Our paper, Beyond Procedure: Substantive Fairness in Conformal Prediction, with Pengqi Liu, Zijun Yu, Mouloud Belbahri, Masoud Asgharian, and Jesse Cresswell, is now available on https://arxiv.org/abs/2602.16794

Conformal prediction (CP) offers distribution-free uncertainty quantification for machine learning models, yet its interplay with fairness in downstream decision-making remains underexplored. Moving beyond CP as a standalone operation (procedural fairness), we analyze the holistic decision-making pipeline to evaluate substantive fairness, the equity of downstream outcomes. Theoretically, we derive an upper bound that decomposes prediction-set size disparity into interpretable components, clarifying how label-clustered CP helps control method-driven contributions to unfairness. To facilitate scalable empirical analysis, we introduce an LLM-in-the-loop evaluator that approximates human assessment of substantive fairness across diverse modalities. Our experiments reveal that label-clustered CP variants consistently deliver superior substantive fairness. Finally, we empirically show that equalized set sizes, rather than coverage, strongly correlate with improved substantive fairness, enabling practitioners to design fairer CP systems. Our code is available at this https URL.
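For readers unfamiliar with the set-size quantity mentioned above, here is a minimal split-conformal sketch (a generic illustration with synthetic scores, not the label-clustered variants studied in the paper) showing how prediction sets are built and how their average size can be compared across groups.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n_classes = 0.1, 5                    # target miscoverage and number of labels

def fake_proba(n, concentration):
    """Stand-in for a trained classifier's predicted probabilities (purely synthetic)."""
    return rng.dirichlet(np.full(n_classes, concentration), size=n)

# Group 0 gets confident (peaked) predictions, group 1 gets uncertain (flat) ones.
cal_g  = rng.integers(0, 2, 1_000)
cal_p  = np.where(cal_g[:, None] == 0, fake_proba(1_000, 0.3), fake_proba(1_000, 3.0))
cal_y  = np.array([rng.choice(n_classes, p=p) for p in cal_p])   # labels follow the scores
test_g = rng.integers(0, 2, 400)
test_p = np.where(test_g[:, None] == 0, fake_proba(400, 0.3), fake_proba(400, 3.0))

# Split conformal: nonconformity = 1 - probability of the true label on calibration data.
scores = 1.0 - cal_p[np.arange(len(cal_y)), cal_y]
qhat = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# Prediction set = all labels whose nonconformity stays below the threshold.
set_sizes = ((1.0 - test_p) <= qhat).sum(axis=1)
for g in (0, 1):
    print(g, set_sizes[test_g == g].mean())   # set-size disparity across groups
```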

Balance and Calibration of Probabilistic Scores: From GLM to Machine Learning

Tomorrow, I will give a talk on “Balance and Calibration of Probabilistic Scores: From GLM to Machine Learning” at the ESSEC Asia-Pacific campus in Singapore. The abstract is

This study evaluates binary classifier performance with a focus on calibration, which is often overlooked by traditional metrics like accuracy. In high-stakes domains such as finance and healthcare, well-calibrated probabilities are crucial. We highlight the limitations of standard calibration metrics, particularly under score distortions and heterogeneous distributions. To address this, we introduce the Local Calibration Score and advocate optimizing models using Kullback-Leibler (KL) divergence to better align predicted scores with true probabilities. Our approach emphasizes balancing global and local calibration, ensuring overall distributional alignment while maintaining reliability across different score ranges. Using Random Forest and XGBoost across diverse datasets, we show that KL-based tuning improves calibration without sacrificing performance. Our results reveal that relying solely on traditional metrics can mislead model assessment, especially in sensitive decision-making scenarios. This is some joint work with Agathe Fernandes Machado and Ewen Gallic.
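To illustrate the kind of diagnostic involved (a generic, binned, KL-based calibration check; this is not the Local Calibration Score introduced in the paper, just a sketch of the idea of comparing predicted scores to observed frequencies with a KL divergence):

```python
import numpy as np

def binned_kl_calibration(y, p, n_bins=10, eps=1e-9):
    """Within each score bin, compare the observed event rate to the mean predicted
    score via a Bernoulli KL divergence, weighted by the bin's share of observations."""
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        o = np.clip(y[mask].mean(), eps, 1 - eps)   # observed frequency in the bin
        q = np.clip(p[mask].mean(), eps, 1 - eps)   # average predicted probability in the bin
        kl = o * np.log(o / q) + (1 - o) * np.log((1 - o) / (1 - q))
        total += mask.mean() * kl
    return total

# Toy check: well-calibrated scores give a value near zero, distorted scores do not.
rng = np.random.default_rng(2)
p = rng.uniform(0, 1, 20_000)
y = rng.binomial(1, p)                        # outcomes drawn from the scores themselves
print(binned_kl_calibration(y, p))            # close to 0
print(binned_kl_calibration(y, p ** 2))       # distorted scores, clearly larger value
```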

Modeling and Understanding Indirect Discrimination in Algorithmic Fairness

In a couple of days, I will give a talk on “Modeling and Understanding Indirect Discrimination in Algorithmic Fairness” at the ESSEC Asia-Pacific campus in Singapore. The abstract is

Observed disparities between groups in algorithmic decisions (whether in hiring, credit approval, or risk prediction) do not necessarily imply direct discrimination. They may also stem from legitimate differences in the distribution of explanatory attributes. Understanding and quantifying which components of these gaps are “explained” versus those that reflect direct or indirect discrimination lies at the core of modern causal approaches to algorithmic fairness. This talk will begin with an accessible introduction to group-gap decomposition, building on the classical Kitagawa–Oaxaca–Blinder econometric framework. This approach separates differences attributable to observable characteristics from residual components that may signal discriminatory effects. The second part will introduce recent developments leveraging optimal transport to construct individual-level counterfactuals, enabling estimation of direct and indirect causal effects for each observation. In particular, we will show how sequential transport mappings aligned with a causal graph can disentangle pathways and quantify the contribution of each mediator. This methodology overcomes limitations of traditional linear models, introduced by Kitagawa, Oaxaca and Blinder, provides interpretable counterfactuals, and is well suited to complex empirical settings. The presentation will combine intuitive motivation, illustrative examples, and recent research insights, with the goal of making these tools accessible and useful to researchers in management science, applied economics, and data science.
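For readers who have not seen it, the classical Kitagawa–Oaxaca–Blinder decomposition that the first part of the talk builds on can be written as follows (standard two-fold form, with group B taken as the reference for the coefficients):

```latex
% Two-fold Kitagawa–Oaxaca–Blinder decomposition of a mean outcome gap between
% groups A and B, with linear models fitted separately in each group (each fit
% including an intercept, so group means of fitted values equal group means of y):
\bar{y}_A - \bar{y}_B
  = \underbrace{(\bar{x}_A - \bar{x}_B)^\top \hat{\beta}_B}_{\text{explained by characteristics}}
  + \underbrace{\bar{x}_A^\top (\hat{\beta}_A - \hat{\beta}_B)}_{\text{unexplained (structural) part}}
```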

Talk at NTU (Nanyang) in Singapore

Tomorrow, I will be at Nanyang Technological University to give a talk at an internal seminar, “Fairness and discrimination in insurance”.

What’s unique about insurance is that even statistical discrimination, which by definition is devoid of malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat their policyholders equally, without discrimination based on race, gender, age or other characteristics, even if it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination lies at the core of actuaries’ activities: distinguishing between risky and non-risky policyholders. And this risk is often statistically correlated with sensitive characteristics that regulation would like to prohibit insurers from taking into account. The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject. Most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data facilitate statistical or proxy discrimination, and black-box algorithms do not make decisions easier to understand. Not to mention the various regulations that make it difficult to collect sensitive information and, ultimately, to test whether decisions are discriminatory, especially indirectly.

The talk is based on the textbook Insurance, Biases, Discrimination and Fairness, as well as recent papers, arXiv:2511.11294 (AAAI’26), arXiv:2408.03425 (AAAI’25), arXiv:2309.06627 (AAAI’24) and arXiv:2306.12912  (ECML’24).

Busy week in Singapore

It has been a busy week at the 40th Annual AAAI Conference on Artificial Intelligence, here in Singapore, where Bertille Tierny and François Hu are giving talks (in the “main track”, in the “student track”, and in a workshop) to present our recent work, “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”. More to come very soon…

III Congreso Universitario Internacional sobre Seguros y Reaseguros en Perú

In a few hours, I will give a talk at the III Congreso Universitario Internacional sobre Seguros y Reaseguros en Perú.

My talk will be on Detecting Hidden Bias in Insurance AI Through Counterfactuals.

As AI becomes embedded in underwriting, pricing, fraud detection, and claims automation, one challenge remains widely underestimated: models can discriminate without ever using a prohibited variable. Indirect discrimination (i.e., bias transmitted through correlated or downstream variables) poses a subtle but critical risk for insurers, all the more so since actuarial science relies heavily on models based on proxy variables. This presentation will explore how causal reasoning and counterfactual thinking can illuminate what standard machine learning methods often obscure. I will begin with intuitive economic decompositions of group disparities, then show how recent advances in optimal-transport-based counterfactuals enable us to ask: “What would this prediction have been if the sensitive attribute had been different?” Drawing on a recent framework for causal mediation via sequential optimal transport, we will see how actuaries can break down total model disparities into direct and indirect components (even with complex mediators such as behavioral features, prior claims history, or categorical underwriting variables). The goal of the session is to leave the audience with a clear understanding of where hidden bias may emerge in insurance AI systems, how to diagnose it using modern causal tools, and how these insights support better governance, transparency, and compliance. A forward-looking, accessible presentation to close the day and open new perspectives on fair and responsible AI in insurance.
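To make the counterfactual question more concrete, here is one standard way of writing the direct/indirect split, with notation of my own (m is the model, S the sensitive attribute, X the other variables, and X* the counterfactual version of X obtained by transporting the covariate distribution of one group onto the other's):

```latex
% If X^{\star} = T(X) transports the law of X | S=0 onto the law of X | S=1, then
% E[m(1, X^{\star}) | S=0] = E[m(1, X) | S=1], and the observed disparity splits as:
\underbrace{\mathbb{E}[m(1,X)\mid S=1] - \mathbb{E}[m(0,X)\mid S=0]}_{\text{total disparity}}
 = \underbrace{\mathbb{E}[m(1,X)\mid S=0] - \mathbb{E}[m(0,X)\mid S=0]}_{\text{direct effect of the sensitive attribute}}
 + \underbrace{\mathbb{E}[m(1,X^{\star})\mid S=0] - \mathbb{E}[m(1,X)\mid S=0]}_{\text{indirect effect through the covariates}}
```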

or, in Spanish, Detección de Sesgos Ocultos en la IA de Seguros con Métodos Contrafactuales (although I will not give my talk in Spanish); the Spanish abstract is a direct translation of the one above.

Talk at the JFLI, at the NII (国立情報学研究所) in Tokyo (東京)

On my way to Tokyo (東京) for a couple of days, where I was invited to give a talk by the JFLI (Japanese-French Laboratory for Informatics) at the National Institute of Informatics (国立情報学研究所), Room 1810, NII. The talk will be on “Counterfactual and Transport-Based Methods for Understanding Indirect Discrimination in Algorithmic Systems”.

Understanding disparities between demographic groups in algorithmic predictions remains a central challenge in responsible AI. Classical decomposition methods such as the Kitagawa–Oaxaca–Blinder framework, recently extended to nonlinear and machine-learning settings by Tierney et al. (AAAI 2026), show that observed gaps may arise either from legitimate differences in feature distributions or from structural bias. However, such aggregate decompositions provide limited insight into individual-level counterfactual behaviour. In this talk, I will present recent methodological advances that combine causal reasoning with optimal transport to characterize direct and indirect discriminatory pathways in modern predictive systems. Building on transport-based counterfactuals (Fernandes Machado et al., AAAI 2025; IJCAI 2025), we obtain individual-level counterfactual mediators that respect a given causal graph, including both continuous and categorical variables. This enables a fine-grained decomposition of model disparities into components attributable to causal pathways, beyond what is possible with standard fairness metrics or feature-removal strategies. The presentation will emphasize: the connection between decomposition-based fairness analyses and causal mediation; the construction of transport-based counterfactuals aligned with probabilistic graphical models; and applications showing how indirect discrimination can propagate through proxy variables even when sensitive features are not used. The goal is to give a concise and technically grounded overview of how optimal transport and counterfactual inference can provide interpretable tools for understanding fairness issues in machine-learning models. This talk is intended for researchers interested in causal ML, fairness analysis, and transport-based generative methods.
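As a toy illustration of the transport-based counterfactual idea (in one dimension, where optimal transport reduces to quantile matching; the papers cited above handle multivariate and categorical mediators via sequential transport, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy mediator distributions for the two groups (one numeric mediator only).
x0 = rng.gamma(2.0, 1.0, 5_000)            # X | S = 0
x1 = rng.gamma(3.0, 1.2, 5_000)            # X | S = 1

def transport(x, source, target):
    """Univariate optimal transport: quantile matching, T(x) = F_target^{-1}(F_source(x))."""
    ranks = np.searchsorted(np.sort(source), x, side="right") / len(source)
    return np.quantile(target, np.clip(ranks, 0.0, 1.0))

def model(s, x):
    """Stand-in predictive model that uses both the sensitive attribute and the mediator."""
    return 1.0 / (1.0 + np.exp(-(0.5 * s + 0.3 * x - 1.5)))

x0_star = transport(x0, x0, x1)            # counterfactual mediators for group-0 individuals

total    = model(1, x0_star).mean() - model(0, x0).mean()   # ~ observed group disparity
direct   = model(1, x0).mean()      - model(0, x0).mean()   # flip s, keep the mediator
indirect = model(1, x0_star).mean() - model(1, x0).mean()   # move the mediator along the map
print(total, direct + indirect)            # equal by construction
print(direct, indirect)
```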

Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

Our paper “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”, with Bertille Tierny and François Hu, is now online on arXiv.

Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches on linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of (Chzhen and Schreuder, 2022) and (Fukuchi and Sakuma, 2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
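As a reminder of why correlated features matter here, the group gap of any linear score splits coefficient by coefficient; this elementary identity (not the paper's post-processing result) is what makes the direct/indirect vocabulary natural for linear models:

```latex
% For a linear score f(x, s) = \beta_0 + \beta_s s + x^\top \beta, the gap between groups
% splits into a direct term (the sensitive coefficient) and indirect terms carried by
% features whose means differ across groups:
\mathbb{E}[f \mid S=1] - \mathbb{E}[f \mid S=0]
  = \underbrace{\beta_s}_{\text{direct}}
  + \underbrace{\sum_j \beta_j \big( \mathbb{E}[X_j \mid S=1] - \mathbb{E}[X_j \mid S=0] \big)}_{\text{indirect, via correlated features}}
```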

We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.

Chaire PARI talk: the three pillars of fairness in pricing

On Wednesday morning (in France), afternoon (in Kyoto), night (in Montréal), I will give a talk at the monthly seminar of the Chaire PARI, entitled Un cadre de gouvernance à trois piliers pour une tarification équitable de l’assurance (a three-pillar governance framework for fair insurance pricing). I will present our recent work, published by the Chaire PARI itself (working paper 37) and also associated with a report published by the Casualty Actuarial Society in the United States (presented last week by Olivier), “A Scalable toolbox for exposing indirect discrimination in insurance rates”. The slides are online.

Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression

Our paper, Disentangled Deep Smoothed Bootstrap for Fair Imbalanced Regression, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

Imbalanced distribution learning is a common and significant challenge in predictive modeling, often reducing the performance of standard algorithms. Although various approaches address this issue, most are tailored to classification problems, with a limited focus on regression. This paper introduces a novel method to improve learning on tabular data within the Imbalanced Regression (IR) framework, which is a critical problem. We propose using Variational Autoencoders (VAEs) to model and define a latent representation of data distributions. However, VAEs can be inefficient with imbalanced data like other standard approaches. To address this, we develop an innovative data generation method that combines a disentangled VAE with a Smoothed Bootstrap applied in the latent space. We evaluate the efficiency of this method through numerical comparisons with competitors on benchmark datasets for IR.
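To give a rough idea of the mechanism (a schematic sketch only: the linear encoder/decoder below is a trivial stand-in for the trained disentangled VAE, and the Gaussian perturbation is a generic smoothed bootstrap, not the paper's exact generator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Trivial stand-ins for a trained disentangled VAE: a fixed linear "encoder" and its inverse.
# In the actual method these would be the learned encoder/decoder networks.
W = rng.normal(size=(5, 5))
encode = lambda x: x @ W
decode = lambda z: z @ np.linalg.inv(W)

def latent_smoothed_bootstrap(x, n_new, bandwidth=0.2, weights=None):
    """Resample latent codes (optionally with weights favouring under-represented regions),
    perturb them with Gaussian kernel noise (the 'smoothed' part), then decode."""
    z = encode(x)
    idx = rng.choice(len(z), size=n_new, replace=True, p=weights)
    z_new = z[idx] + bandwidth * z.std(axis=0) * rng.normal(size=(n_new, z.shape[1]))
    return decode(z_new)

# Toy usage: generate synthetic rows from a small tabular dataset.
x = rng.normal(size=(300, 5))
print(latent_smoothed_bootstrap(x, n_new=1_000).shape)    # (1000, 5)
```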

KurtHGR: A Neural Maximal Correlation for Tabular Datasets

Our paper, KurtHGR: A Neural Maximal Correlation for Tabular Datasets, with Samuel Stocksieker and Denys Pommeret, has been published in Procedia Computer Science.

The study of dependencies between variables is a fundamental pillar of machine learning, influencing areas as diverse as feature selection, fairness, dimensionality reduction, and multimodal learning. Among nonlinear correlation measures, the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation stands out for its universality and remarkable theoretical properties. Defined as the maximum achievable correlation between nonlinear transformations of two random variables, it provides an intrinsic quantification of statistical dependence, regardless of their marginal distributions. However, despite its theoretical potential, its practical adoption still faces several challenges. In this paper, we present a new approach called KurtHGR, dedicated to the estimation of the bivariate nonlinear correlation matrix of a set of variables. We show that this solution is effective in detecting nonlinear correlations, robust to noise, and computationally efficient, thanks to a neural architecture specifically designed for this purpose. We evaluate its performance through numerical illustrations and feature selection experiments, where we demonstrate that KurtHGR empirically outperforms state-of-the-art approaches.
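For reference, the HGR maximal correlation discussed in the paper is defined as

```latex
% Hirschfeld–Gebelein–Rényi maximal correlation between two random variables X and Y:
\mathrm{HGR}(X, Y) \;=\; \sup_{f,\,g}\ \mathrm{corr}\big(f(X),\, g(Y)\big) \;\in\; [0, 1]
```

where the supremum runs over transformations f and g with finite, non-zero variance; it equals 0 if and only if X and Y are independent, and reaches 1 when some non-trivial transformations of X and Y coincide almost surely.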

GCKE 2025, Osaka, Annual Global Congress of Knowledge Economy

This week, I will attend the 10th Annual Global Congress of Knowledge Economy, in Osaka (大阪). On Wednesday morning, I will chair the GCKE04 session, Economic Governance & Sustainable Development. I will also give a talk on “Fairness and Insurance: Disentangling Illegitimate and Indirect Discriminations” (slides are available). It is based on recent work with Olivier and Marie-Pier Côté.

2025 CAS (Casualty Actuarial Society) Canada Connection

In less than a month, Olivier Côté will attend the CAS Canada Connection, in Toronto. He will speak in a session, Operationalizing Fairness in Actuarial Pricing: From Principle to Practice.

Fairness metrics often lack actuarial relevance and are expressed in abstract units, obscuring real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people. This session will present new research from the CAS Race and Insurance Pricing series, focusing on the unique challenge of establishing fairness in actuarial pricing. We argue that actuarial fairness, solidarity, and causality form the three dimensions of fairness in insurance. These give rise to a five-point spectrum of pricing benchmarks, each reflecting distinct fairness goals and trade-offs. We quantify the monetary impact of unfairness at both the policyholder and segment levels through a large-scale Québec auto insurance case study.

The learning objectives are to (1) describe the three dimensions of fairness in insurance pricing (actuarial fairness, solidarity, and causality), (2) translate these dimensions into a spectrum of five pricing benchmarks, and (3) diagnose and quantify potential unfairness at both the individual and segment levels using actuarially meaningful metrics.

It will be based on our recent paper, “A Scalable toolbox for exposing indirect discrimination in insurance rates”.