
Probabilistic Scores of Classifiers, Calibration is not Enough

Our paper “Probabilistic Scores of Classifiers, Calibration is not Enough”, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now available at https://arxiv.org/abs/2408.03421

In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications such as predicting payment defaults or assessing medical risks. The model must then be well-calibrated to ensure alignment between predicted probabilities and actual outcomes. However, when score heterogeneity deviates from the underlying data probability distribution, traditional calibration metrics lose reliability, failing to align score distribution with actual probabilities. In this study, we highlight approaches that prioritize optimizing the alignment between predicted scores and true probability distributions over minimizing traditional performance or calibration metrics. When employing tree-based models such as Random Forest and XGBoost, our analysis emphasizes the flexibility these models offer in tuning hyperparameters to minimize the Kullback-Leibler (KL) divergence between predicted and true distributions. Through extensive empirical analysis across 10 UCI datasets and simulations, we demonstrate that optimizing tree-based models based on KL divergence yields superior alignment between predicted scores and actual probabilities without significant performance loss. In real-world scenarios, the reference probability is determined a priori as a Beta distribution estimated through maximum likelihood. Conversely, minimizing traditional calibration metrics may lead to suboptimal results, characterized by notable performance declines and inferior KL values. Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
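
To make the idea concrete, here is a minimal R sketch (not the paper's code) of the quantity being optimized: a Beta reference is fitted by maximum likelihood, and the KL divergence between the distribution of a model's predicted scores and that reference is computed on a discretized grid. The simulated data and all names are illustrative assumptions; in the paper, the reference Beta is estimated a priori.

# illustrative sketch: KL divergence between predicted scores and a Beta reference
library(MASS)                                 # for fitdistr()
set.seed(123)
p_true <- rbeta(1000, 2, 5)                   # simulated "true" probabilities
scores <- plogis(qlogis(p_true) + rnorm(1000, 0, .5))  # a (miscalibrated) model's scores
# reference Beta fitted by maximum likelihood (here on p_true, for illustration)
fit <- fitdistr(p_true, "beta", start = list(shape1 = 1, shape2 = 1))
# discretize [0, 1] and compare bin probabilities
bins  <- seq(0, 1, length.out = 51)
q_ref <- diff(pbeta(bins, fit$estimate[1], fit$estimate[2]))
p_hat <- hist(scores, breaks = bins, plot = FALSE)$counts / length(scores)
eps   <- 1e-10                                # avoid log(0)
sum((p_hat + eps) * log((p_hat + eps) / (q_ref + eps)))  # KL(scores || reference)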

Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness

Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness”, written with Agathe Fernandes Machado and Ewen Gallic, is now online.

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.
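
As a rough illustration of this sequential transport idea (a sketch under simplifying assumptions, not the paper's implementation), the following R code transports a first variable between two groups by quantile matching, then transports a second variable conditionally on the first, with the conditioning approximated by quantile bins; the data, the binning, and all names are made up for the example.

# illustrative sketch: sequential (triangular) transport of (x1, x2) from group 0 to group 1
set.seed(42)
n  <- 2000
s  <- rbinom(n, 1, .5)                                # sensitive attribute, two groups
x1 <- rnorm(n, mean = ifelse(s == 1, 1, 0))
x2 <- .8 * x1 + rnorm(n, mean = ifelse(s == 1, .5, 0))
# step 1: marginal quantile transport of x1
F0 <- ecdf(x1[s == 0])
T1 <- function(x) quantile(x1[s == 1], F0(x), names = FALSE)
# step 2: transport of x2 conditionally on x1, within quantile bins of x1
bin <- function(x, grp) cut(x, quantile(x1[s == grp], 0:5 / 5), include.lowest = TRUE)
T2  <- function(x2i, x1i) {
  b0 <- bin(x1i, 0)                                   # bin of x1 in group 0
  b1 <- bin(T1(x1i), 1)                               # bin of transported x1 in group 1
  u  <- ecdf(x2[s == 0][bin(x1[s == 0], 0) == b0])(x2i)
  quantile(x2[s == 1][bin(x1[s == 1], 1) == b1], u, names = FALSE)
}
# counterfactual of the first individual from group 0
i <- which(s == 0)[1]
c(T1(x1[i]), T2(x2[i], x1[i]))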

Bayesian Improved Surname Geocoding to predict “Race” in the U.S.

After Florent’s talk last week, Ana (Patrón Piñerez) will give a talk this Wednesday to conclude her internship in Montréal (co-supervised with Agathe), on Bayesian Improved Surname Geocoding to predict “Race” in the U.S.

This study focuses on predicting race within the United States, a topic of significant sensitivity due to legal prohibitions against discrimination based on ‘race, color, or previous condition of servitude’ (Civil Rights Act of 1866, 1964). At the same time, due to the rising prevalence of big data systems, insurers are increasingly required to adhere to regulations such as Colorado SB21-169, which mandate demonstrating non-discriminatory practices toward sensitive attributes such as race. In this context, our research explores various methodologies for race prediction. We begin by examining pre-Bayesian approaches that utilize geolocation and surname data, progressing to Bayesian methods that integrate these sources. Specifically, we discuss Bayesian Improved Surname Geocoding (BISG) and its adaptations. Challenges associated with these techniques, such as handling zero counts for minorities and data scarcity, are also addressed. Finally, we propose a novel strategy known as nested dichotomies, rooted in the BISG algorithm. Unlike traditional multiclass prediction, this approach involves sequential binomial predictions structured within a nested framework.
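
At its core, the BISG update is simply Bayes’ rule, combining a surname-based prior with a geography-based likelihood. A toy R illustration, with made-up numbers (not the material from the talk):

# illustrative BISG update: P(race | surname, geo) is proportional to
# P(race | surname) * P(geo | race), renormalized over races
p_race_given_surname <- c(white = .60, black = .25, hispanic = .10, asian = .05)
p_geo_given_race     <- c(white = .02, black = .10, hispanic = .05, asian = .03)
posterior <- p_race_given_surname * p_geo_given_race
posterior <- posterior / sum(posterior)       # normalize
round(posterior, 3)
# a zero count (e.g., no recorded residents of some group in a census block)
# zeroes out the posterior, hence the smoothing and nested strategies above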

Measuring and mitigating biases in motor insurance pricing

Our paper, with Mulah Moriah and Franck Vermet, Measuring and mitigating biases in motor insurance pricing, has recently been published in the European Actuarial Journal.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, considering various variables. For instance, regulations mandate the provision of equitable premiums with respect to factors such as policyholder gender. Mutualist groups may also implement age-based premium fairness, in accordance with their respective corporate strategies. In certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices, while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness biases can be found in historical data and models, and that fairer outcomes can be obtained by more fairness-aware approaches.
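
As a minimal illustration of what measuring such a bias can mean (a sketch on synthetic data, not the toolbox developed in the paper), the R code below computes a demographic-parity-style gap between average predicted premiums across a sensitive attribute, and shows the most naive mitigation; all names and numbers are assumptions.

# illustrative sketch: a demographic-parity-style gap in predicted claim frequencies
set.seed(1)
n       <- 5000
gender  <- rbinom(n, 1, .5)                     # sensitive attribute
age     <- pmin(pmax(rnorm(n, 45, 15), 18), 90)
nclaims <- rpois(n, exp(-2 + .01 * (age - 45) + .15 * gender))
m    <- glm(nclaims ~ age + gender, family = poisson)
prem <- predict(m, type = "response")
diff(tapply(prem, gender, mean))                # gap between group averages
# naive mitigation: drop the sensitive attribute (removes the direct effect only;
# correlated covariates would keep an indirect, proxy effect)
m2 <- glm(nclaims ~ age, family = poisson)
diff(tapply(predict(m2, type = "response"), gender, mean))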

Exploration des Techniques de Transfert d’Apprentissage sur des Données Climatiques

Summer is moving along. Florent, who has been an intern for almost four months, will give a talk tomorrow at the students’ summer seminar, on “Exploration des Techniques de Transfert d’Apprentissage sur des Données Climatiques” (exploring transfer learning techniques on climate data).

Transfer learning represents a notable advance in the field of artificial intelligence. The technique rests on the idea that when a model is well trained and performs well in a given context, that know-how can be reused in another, different but related, context to improve modelling, particularly when little information is available about the latter. However, the performance of transfer learning models varies, and the choice to transfer knowledge is not always optimal. The main difficulty lies in the ability to guarantee the performance of the transfer, in order to justify that choice. This approach could prove particularly valuable in the context of climate data analysis. Indeed, the models used in various sectors could benefit from advanced transfer learning techniques to improve forecast accuracy and to optimize climate change adaptation and mitigation strategies. Moreover, transfer learning makes it possible to reuse existing models, thereby reducing the time and environmental costs of training new models. This method could make climate analysis more efficient and sustainable, while saving valuable resources.
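
As a minimal sketch of one possible transfer strategy (illustrative only, not Florent’s work): a model fitted on a data-rich source sample is kept as an offset, and only a small correction is estimated on the scarce target sample. All names and numbers below are made up.

# illustrative sketch: offset-based transfer from a source to a target sample
set.seed(7)
n_src <- 5000; n_tgt <- 60
x_src <- runif(n_src, 0, 30); y_src <- 10 + 1.5 * x_src + rnorm(n_src, 0, 3)
x_tgt <- runif(n_tgt, 0, 30); y_tgt <- 12 + 1.7 * x_tgt + rnorm(n_tgt, 0, 3)
src <- lm(y_src ~ x_src)                        # trained on the data-rich source
# transfer: keep the source predictions as an offset, estimate only a correction
corr <- lm(y_tgt ~ x_tgt, offset = predict(src, newdata = data.frame(x_src = x_tgt)))
coef(corr)                                      # small corrections to the source model
coef(lm(y_tgt ~ x_tgt))                         # vs. fitting from scratch on 60 points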

As an anecdote, almost five years ago I gave a talk in Paris, “The Challenge of Predictive Models (for Rare Events)” (the slides are still online). My research interests have not changed much.

Some updates about the insurance datasets package (CASdatasets)

Ten years ago, Computational Actuarial Science with R was published. At the same time, with Christophe Dutang, we created an R package collecting the datasets used in the book. The main goal was to give access to those datasets so that the applications could be reproduced, since the functions used in the various chapters came from other R packages. We then started adding more and more datasets, not used in the book, but that could be useful to researchers and students. We are quite happy to see that those datasets are now considered a benchmark in the actuarial and insurance literature (and also outside the community, actually).

Maintenance was a bit complicated since the package could not be hosted on CRAN (the Comprehensive R Archive Network), so it lived either in Christophe’s GitHub repository or on a dedicated website at UQAM. Christophe’s repository,

https://dutangc.github.io/CASdatasets/

is under construction (or undergoing a major refresh, with Ewen Gallic), and several vignettes will be added. We encourage colleagues and students who have used datasets from the package to share their code: we can now host such applications. The package is also available on the following repository,

https://entrepot.recherche.data.gouv.fr/

Hence, the datasets now have an official DOI (doi:10.57745/P0KHAG), which makes them easier to cite. The following bib entry can be used:

@data{P0KHAG_2024,
author = {Dutang, Christophe and Charpentier, Arthur},
publisher = {Recherche Data Gouv},
title = {{Insurance dataset}},
year = {2024},
version = {V1},
doi = {10.57745/P0KHAG},
url = {https://doi.org/10.57745/P0KHAG}
}
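
Assuming the package has been installed following the instructions on https://dutangc.github.io/CASdatasets/, the datasets can then be loaded the usual way, for instance one of the French motor third-party liability datasets:

# load one of the datasets shipped with the package
library(CASdatasets)
data(freMTPL2freq)
str(freMTPL2freq)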

Talk at the 27th International Congress on Insurance: Mathematics and Economics

On Wednesday morning, I will be chairing our session “Discrimination-free Insurance Pricing” at the Insurance: Mathematics and Economics congress, in Chicago. With Olivier Côté, Lydia Gabric and Hong Beng Lim, we will be four speakers, just before lunchtime. My talk will be a mix of recent work on quantifying and mitigating discrimination in scores (in insurance). Slides are available online.

 

Talk on collaborative insurance, unfairness and discrimination

Monday, I will be giving a short course at the workshop on Decentralized Insurance and Risk Sharing (SAC 161), in Chicago:

  • Decentralized Finance and Blockchain: Implications for the Insurance Industry, by Marco Mirabella
  • Decentralized risk sharing: definitions, properties, and characterizations, by Jan Dhaene
  • Collaborative insurance, unfairness, and discrimination, by Arthur Charpentier
  • Decentralized insurance: bridging the gap between industry practice and academic theory, by Runhuan Feng

My slides are available online.

In this course, we will get back to the mathematical properties of risk sharing on networks, with reciprocal contracts. We will discuss conditions based on stochastic dominance, proving that policyholders might have an interest in sharing risks with “friends”.
Then, we will try to address fairness issues for such risk-sharing mechanisms. While fairness has recently been intensively studied, through either group or individual fairness, there is not yet much literature about fairness on networks. It is important to address those issues, since perceived discrimination is usually associated with networks. We will see why the topology of the network matters, both to design peer-to-peer schemes to share risks, and to see whether perceived discrimination is associated with global disparate treatment.
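
As a minimal simulation of the first point (a sketch under simple assumptions, not the course material): on a random friendship graph, each loss is split equally across its owner’s neighbourhood, so total payments are preserved while individual payments are much less dispersed, consistent with the dominance results mentioned above.

# illustrative sketch: reciprocal risk sharing on a random friendship graph
set.seed(3)
n      <- 200
losses <- rexp(n)                               # individual losses
A <- matrix(rbinom(n * n, 1, .05), n, n)
A <- (A | t(A)) * 1; diag(A) <- 1               # symmetric adjacency, self included
# each loss is shared equally within its owner's neighbourhood
paid <- as.vector(A %*% (losses / rowSums(A)))
c(sum(losses), sum(paid))                       # total payments are preserved
c(var(losses), var(paid))                       # dispersion is (much) reduced
# on an irregular graph, expected payments differ across nodes, which is
# where fairness questions about the network topology arise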

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

"sendo l'intento mio scrivere cosa utile a chi la intende…"