Bayesian Improved Surname Geocoding to predict “Race” in the U.S.

After Florent's talk last week, Ana (Patrón Piñerez) will give a talk this Wednesday to conclude her internship in Montréal (co-supervised with Agathe), on Bayesian Improved Surname Geocoding to predict “Race” in the U.S.

This study focuses on predicting race within the United States, a topic of significant sensitivity due to legal prohibitions against discrimination based on ‘race, color, or previous condition of servitude’ (Civil Rights Act of 1866, 1964). At the same time, due to the rising prevalence of big data systems, insurers are increasingly required to adhere to regulations such as Colorado SB21-169, which mandate demonstrating non-discriminatory practices toward sensitive attributes such as race. In this context, our research explores various methodologies for race prediction. We begin by examining pre-Bayesian approaches that utilize geolocation and surname data, progressing to Bayesian methods that integrate these sources. Specifically, we discuss Bayesian Improved Surname Geocoding (BISG) and its adaptations. Challenges associated with these techniques, such as handling zero counts for minorities and data scarcity, are also addressed. Finally, we propose a novel strategy known as nested dichotomies, rooted in the BISG algorithm. Unlike traditional multiclass prediction, this approach involves sequential binomial predictions structured within a nested framework.
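For readers who have never seen the mechanics, here is a minimal sketch of the BISG update in R, with purely hypothetical probabilities (not Census figures): assuming surname and location are conditionally independent given race, the posterior P(race | surname, location) is proportional to P(race | surname) × P(race | location) / P(race),

# Minimal BISG sketch, with hypothetical probabilities (not Census figures).
# Under conditional independence of surname and location given race,
# P(race | surname, location) is proportional to
#   P(race | surname) * P(race | location) / P(race).
bisg <- function(p_race_surname, p_race_geo, p_race) {
  post <- p_race_surname * p_race_geo / p_race   # unnormalized posterior
  post / sum(post)                               # normalize over the race categories
}

races <- c("white", "black", "hispanic", "asian", "other")
p_race_surname <- c(.05, .02, .03, .88, .02)   # hypothetical P(race | surname)
p_race_geo     <- c(.60, .15, .10, .10, .05)   # hypothetical P(race | census block)
p_race         <- c(.60, .13, .18, .06, .03)   # hypothetical marginal P(race)
round(setNames(bisg(p_race_surname, p_race_geo, p_race), races), 3)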

Measuring and mitigating biases in motor insurance pricing

Our paper, Measuring and mitigating biases in motor insurance pricing, written with Mulah Moriah and Franck Vermet, has recently been published in the European Actuarial Journal,

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, considering various variables. For instance, regulations mandate the provision of equitable premiums with respect to factors such as policyholder gender. Mutualist groups, in accordance with their respective corporate strategies, may also implement age-based premium fairness. In certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness bias can be found in historical data and models, and that fairer outcomes can be obtained with more fairness-aware approaches.
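As a very rough illustration of the “measuring” step (a toy sketch in R with simulated data, not the metrics or the methodology of the paper), one can compare the distribution of predicted premiums across the levels of a sensitive attribute, for instance through the gap in average predictions and in group-specific quantiles,

# Toy sketch of a group-fairness diagnostic on predicted premiums,
# with simulated data (illustrative only, not the paper's metrics).
set.seed(1)
n <- 1000
sensitive <- sample(c("A", "B"), n, replace = TRUE)        # hypothetical sensitive attribute
premium <- rgamma(n, shape = 2, rate = 1 / 300) +          # hypothetical predicted premiums
  ifelse(sensitive == "B", 50, 0)                          # with a built-in disparity

# Demographic-parity-type gap: difference in average predicted premiums
abs(mean(premium[sensitive == "A"]) - mean(premium[sensitive == "B"]))

# Distributional view: gaps between group-specific quantiles
q <- seq(.1, .9, by = .2)
round(quantile(premium[sensitive == "A"], q) - quantile(premium[sensitive == "B"], q), 1)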

Exploring Transfer Learning Techniques on Climate Data

Summer is moving along. Florent, who has been interning with us for almost four months, will give a talk tomorrow at the students' summer seminar, on “Exploring Transfer Learning Techniques on Climate Data”.

Transfer learning is a notable advance in artificial intelligence. The technique relies on the idea that when a model is well trained and performs well in a given context, this knowledge can be reused in another, different but related, context to improve modelling, especially when little information is available about the latter. However, the performance of transfer-learning models varies, and the decision to transfer knowledge is not always optimal. The main difficulty lies in being able to guarantee the performance of the transfer so as to justify that decision. This approach could prove particularly valuable in the context of climate data analysis. Indeed, models used in various sectors could benefit from advanced transfer-learning techniques to improve forecast accuracy and to optimize climate-change adaptation and mitigation strategies. Moreover, transfer learning allows existing models to be reused, thereby reducing the time and environmental costs associated with training new models. This could make climate analysis more efficient and sustainable, while saving precious resources.
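To fix ideas (a toy sketch with simulated data, unrelated to the models studied during the internship), the basic intuition can be illustrated in a simple regression setting: a model fitted on a large “source” dataset is reused, through an offset, when only a handful of “target” observations are available,

# Toy sketch of the transfer-learning idea in a regression setting,
# with simulated data (illustrative only).
set.seed(42)
n_source <- 5000; n_target <- 30
x_s <- runif(n_source); y_s <- 2 + 3 * x_s + rnorm(n_source, sd = .5)
x_t <- runif(n_target); y_t <- 2.5 + 3 * x_t + rnorm(n_target, sd = .5)  # related, slightly shifted

source_fit <- lm(y_s ~ x_s)                      # knowledge learned on the source domain
offset_t <- predict(source_fit, newdata = data.frame(x_s = x_t))

# On the target, only a small correction of the source model is estimated
transfer_fit <- lm(y_t ~ 1, offset = offset_t)   # intercept shift on top of source predictions
scratch_fit <- lm(y_t ~ x_t)                     # model trained from scratch on 30 points
c(transfer = coef(transfer_fit), scratch = coef(scratch_fit))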

As an aside, almost five years ago I gave a talk in Paris on “The Challenge of Predictive Models (for Rare Events)” (the slides are still online). My research interests have not changed much,

Some updates about the insurance datasets package (CASdatasets)

Ten years ago, Computational Actuarial Science with R was published. At the same time, with Christophe Dutang, we created an R package collecting the datasets used in the book. The main goal was to give access to the datasets needed to reproduce the applications, since the functions used in the different chapters came from other R packages. We then started adding more and more datasets, not used in the book, that could be used by researchers and students. We are quite happy to see that those datasets are now considered a benchmark in the actuarial and insurance literature (and also outside the community, actually).

Maintenance was a bit complicated, since the package could not be hosted on CRAN (the Comprehensive R Archive Network), so it lived either on Christophe's GitHub repo or on a dedicated website at UQAM. Christophe's repo

https://dutangc.github.io/CASdatasets/

is under construction (or undergoing a major refresh, with Ewen Gallic), and several vignettes will be added. We encourage colleagues, or students, who used datasets from the package to share some code: we can now host those applications. And the data are also available on the following repository,

https://entrepot.recherche.data.gouv.fr/

Hence, the dataset now has an official DOI, which makes it easier to cite: doi:10.57745/P0KHAG. The following bib file can be obtained,

@data{P0KHAG_2024,
author = {Dutang, Christophe and Charpentier, Arthur},
publisher = {Recherche Data Gouv},
title = {{Insurance dataset}},
year = {2024},
version = {V1},
doi = {10.57745/P0KHAG},
url = {https://doi.org/10.57745/P0KHAG}
}
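For those who want to give the package a try, here is a minimal sketch in R (assuming the package can be installed directly from Christophe's GitHub repository with remotes::install_github; check the repository for the exact installation instructions),

# Minimal sketch: install from Christophe's GitHub repository (assuming
# remotes::install_github works with this repository layout; see the repo
# for the exact instructions), then load one of the motor insurance datasets.
# install.packages("remotes")
remotes::install_github("dutangc/CASdatasets")
library(CASdatasets)
data(freMTPL2freq)    # French motor third-party liability, claim frequency
str(freMTPL2freq)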

Talk at the 27th International Congress on Insurance: Mathematics and Economics

On Wednesday morning, I will be chairing our session “Discrimination-free Insurance Pricing” at the Insurance: Mathematics and Economics conference, in Chicago. With Olivier Côté, Lydia Gabric and Hong Beng Lim, we will be four speakers, just before lunch time. My talk will be a mix of recent work on quantifying and mitigating discrimination in scores (in insurance). Slides are available online.

 

Talk on collaborative insurance, unfairness and discrimination

On Monday, I will be giving a short course at the workshop on Decentralized Insurance and Risk Sharing (SAC 161), in Chicago:

  • Decentralized Finance and Blockchain: Implications for the Insurance Industry, by Marco Mirabella
  • Decentralized risk sharing: definitions, properties, and characterizations, by Jan Dhaene
  • Collaborative insurance, unfairness, and discrimination, by Arthur Charpentier
  • Decentralized insurance: bridging the gap between industry practice and academic theory, by Runhuan Feng

My slides are available online.

In this course, we will get back to the mathematical properties of risk sharing on networks, with reciprocal contracts. We will discuss conditions based on stochastic dominance, proving that policyholders might have an interest in sharing risks with “friends”.
Then, we will try to address fairness issues for such risk-sharing mechanisms. While fairness has recently been studied intensively, through either group or individual fairness, there is not yet much literature about fairness on networks. It is important to address those issues, since perceived discrimination is usually associated with networks. We will see why the topology of the network matters, both to design peer-to-peer schemes to share risks, and to see whether perceived discrimination is associated with globally disparate treatment.
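To fix ideas, here is a toy sketch in R of a linear reciprocal risk-sharing scheme on a small network (an illustration of the kind of mechanism discussed, not the exact contracts from the course): each pair of connected policyholders moves a fraction of the gap between their losses toward the pairwise average, so that total losses are preserved while individual losses are smoothed,

# Toy sketch of linear reciprocal risk sharing on a network (illustrative only).
# For each edge {i,j}, the pair moves a fraction gamma of the gap between their
# losses toward the pairwise average, so total losses are preserved.
set.seed(123)
n <- 6
A <- matrix(0, n, n)                    # adjacency matrix of the "friends" network
edges <- rbind(c(1,2), c(2,3), c(3,4), c(4,5), c(5,6), c(6,1), c(1,4))
A[edges] <- 1; A <- A + t(A)

X <- rexp(n, rate = 1 / 100)            # hypothetical individual losses
gamma <- 0.5

# Post-sharing losses: xi_i = X_i + (gamma / 2) * sum over neighbours j of (X_j - X_i)
xi <- X + (gamma / 2) * (A %*% X - rowSums(A) * X)

cbind(before = round(X, 1), after = round(xi, 1))
c(total_before = sum(X), total_after = sum(xi))   # budget balance: totals coincide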

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

Samuel has arrived in Yokohama

After defending his PhD last week, Samuel has just arrived in Yokohama (横浜市) for the International Joint Conference on Neural Networks (IJCNN'24), which will take place within the IEEE World Congress on Computational Intelligence (WCCI).

He will present our recent work on Boarding for ISS: Imbalanced Self-Supervised Discovery of a Scaled Autoencoder for Mixed Tabular Datasets,

The field of imbalanced self-supervised learning, especially in the context of tabular data, has not been extensively studied. Existing research has predominantly focused on image datasets. This paper aims to fill this gap by examining the specific challenges posed by data imbalance in self-supervised learning in the domain of tabular data, with a primary focus on autoencoders. Autoencoders are widely employed for learning and constructing a new representation of a dataset, particularly for dimensionality reduction. They are also often used for generative model learning, as seen in variational autoencoders. When dealing with mixed tabular data, qualitative variables are often encoded using a one-hot encoder with a standard loss function (MSE or Cross Entropy). In this paper, we analyze the drawbacks of this approach, especially when categorical variables are imbalanced. We propose a novel metric to balance learning: a Multi-Supervised Balanced MSE. This approach reduces the reconstruction error by balancing the influence of variables. Finally, we empirically demonstrate that this new metric, compared to the standard MSE: i) outperforms when the dataset is imbalanced, especially when the learning process is insufficient, and ii) provides similar results in the opposite case.
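To give a rough idea of the issue (a toy sketch in base R, not the paper's implementation of the Multi-Supervised Balanced MSE), consider a one-hot encoded categorical variable reconstructed by its marginal frequencies: with a standard MSE, the rare modality contributes very little to the loss, whereas an inverse-frequency weighting rebalances its influence,

# Toy sketch: with one-hot encoding and a standard MSE, a rare modality barely
# contributes to the reconstruction loss; reweighting columns by inverse
# frequency rebalances its influence. (Illustrative only; the paper's
# Multi-Supervised Balanced MSE is defined differently.)
set.seed(7)
n <- 10000
cat_var <- sample(c("A", "B", "C"), n, replace = TRUE, prob = c(.90, .08, .02))
X <- model.matrix(~ cat_var - 1)          # one-hot encoding, n x 3

# "Prior-collapse" reconstruction: every row reconstructed as the marginal frequencies
p_hat <- colMeans(X)
X_hat <- matrix(p_hat, n, 3, byrow = TRUE)
col_mse <- colMeans((X - X_hat)^2)        # per-modality reconstruction error

# Share of each modality in the loss, standard vs inverse-frequency weighting
w <- (1 / p_hat) / sum(1 / p_hat)
share_standard <- col_mse / sum(col_mse)
share_balanced <- (w * col_mse) / sum(w * col_mse)
round(rbind(standard = share_standard, balanced = share_balanced), 3)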