Talk in København

Tomorrow, I will give a talk at Københavns Universitet, entitled "Using optimal transport to quantify and mitigate unfair insurance predictions". It is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).

The insurance industry relies heavily on predictions of risk based on the characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that the practice perpetuates discrimination based on sensitive features such as gender or race. Since this discrimination can often be attributed to historical biases in the data, eliminating, or at least mitigating, it is desirable. With the shift from traditional models to machine-learning-based predictions, calls for mitigation have grown anew, as simply excluding sensitive variables from the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to mitigate these biases through the use of Wasserstein barycenters rather than simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications. The talk is based on a recent textbook (978-3-031-49782-7) as well as work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).
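As a rough illustration of the barycenter idea (not the code from the papers), here is a minimal Python sketch for the one-dimensional case, where the Wasserstein barycenter of the group-wise score distributions has, as quantile function, the weighted average of the groups' quantile functions. All names and the toy data are mine, purely hypothetical.

```python
import numpy as np

def barycenter_correction(scores, group):
    """Map each group's score distribution onto the 1D Wasserstein
    barycenter of all groups, via weighted quantile averaging."""
    groups = np.unique(group)
    # weight each group by its share of the population
    weights = np.array([np.mean(group == g) for g in groups])
    probs = np.linspace(0, 1, 101)
    # quantile function of each group's score distribution
    quantiles = np.array([np.quantile(scores[group == g], probs) for g in groups])
    # the 1D barycenter's quantile function is the weighted average
    bary_q = weights @ quantiles
    corrected = np.empty_like(scores, dtype=float)
    for g in groups:
        mask = group == g
        # empirical CDF value (rank) of each score within its group
        u = np.searchsorted(np.sort(scores[mask]), scores[mask], side="right") / mask.sum()
        corrected[mask] = np.interp(u, probs, bary_q)
    return corrected

# toy example: two groups with shifted score distributions
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
scores = rng.beta(2 + group, 5 - group)  # group 1 scores run higher
fair = barycenter_correction(scores, group)
print(np.round([scores[group == 0].mean(), scores[group == 1].mean()], 3))
print(np.round([fair[group == 0].mean(), fair[group == 1].mean()], 3))
```

After the correction, the two groups share the same score distribution, while the ranking of individuals within each group is preserved; this is what distinguishes the barycenter approach from a simple rescaling of group means.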

From Contemplative to Predictive Modeling

As mentioned yesterday, I gave a talk this afternoon entitled From Contemplative to Predictive Modeling (in actuarial science and risk management). Slides are available online, but maybe I can take some time to explain what I talked about…

It is usually claimed that actuaries build ‘predictive models’, but most of the time what they do is simply ‘contemplative modeling’, in the sense that they use past information and hope that the future will be more or less the same (corresponding to the idea of generalization in machine learning). In the context of climate change (but also when modeling competition in insurance markets), this no longer holds: the data used to train models do not have the same distribution as the data we will face in the future.


SCOR Foundation – Scope and limits of Artificial intelligence

On May 15, 2024, the SCOR Foundation for Science hosted a webinar titled “Scope and limits of Artificial intelligence”, delivered by Arthur Charpentier. A professor in the Department of Mathematics at the University of Quebec in Montreal and a member of the Institute of Actuaries, Arthur Charpentier is an internationally recognized expert in actuarial science and the author of numerous academic articles published in the best actuarial academic journals worldwide.

During the webinar, Arthur Charpentier discussed the research project “Fairness of predictive models: an application to insurance markets”, which is supported by the SCOR Foundation for Science. This project addresses biases within the automatic artificial intelligence algorithms utilized to determine optimal pricing in individual policies. Its aim is to mitigate or eliminate such biases, which could lead to inequities or discriminatory practices based on factors such as gender, race, religion, or origin in the coverage provided by insurers or reinsurers to policyholders.

Trip in (Northern) Europe

For the next two weeks, I will be in (Northern) Europe, with a first stop in Brussels (to visit colleagues), then in Leuven (I will give a talk on Monday at KU Leuven), then in København (I will give a talk on Friday at Københavns Universitet), and finally in Stockholm (at Stockholm University, for the Insurance Data Science conference).

In the Fall, I will be back in Europe, with stops in Lisbon (for the European Actuarial Journal conference), in France (for the Cerisy colloquium) and in Warsaw, Poland, where I will give a two-day course on Insurance, Biases, Discrimination and Fairness.

More to come soon…

Quantifying Fairness and Discrimination in Predictive Models

The article Quantifying Fairness and Discrimination in Predictive Models was just published in Machine Learning for Econometrics and Related Topics, Springer.

The analysis of discrimination has long interested economists and lawyers. In recent years, the computer science and machine learning literature has taken up the subject, offering an interesting re-reading of the topic. These questions follow from numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data and the use of increasingly opaque algorithms, discriminatory algorithms should not come as a surprise, since it has become easy to obtain a proxy for a sensitive variable by enriching the data indefinitely. According to [69], “technology is neither good nor bad, nor is it neutral”, and therefore, “machine learning won’t give you anything like gender neutrality ‘for free’ that you didn’t explicitly ask for”, as claimed by [61]. In this article, we come back to the general context, for predictive models in classification. We present the main concepts of fairness, called group fairness, based on independence between the sensitive variable and the prediction, possibly conditioned on this or that information. We then go further, presenting the concepts of individual fairness. Finally, we show how to correct potential discrimination, in order to guarantee that a model is more ethical.
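To make the group-fairness notion concrete, here is a minimal Python sketch (illustrative, not taken from the article) of two standard criteria: demographic parity, i.e. independence between the prediction and the sensitive attribute, and equalized odds, i.e. the same independence conditional on the true outcome. Function names and the toy data are mine.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups:
    |P(Yhat = 1 | S = 0) - P(Yhat = 1 | S = 1)|."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def equalized_odds_gap(y_pred, y_true, sensitive):
    """Largest gap, over true outcomes y, in
    P(Yhat = 1 | S = s, Y = y) between the two groups."""
    y_pred, y_true, sensitive = map(np.asarray, (y_pred, y_true, sensitive))
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(sensitive == s) & (y_true == y)].mean() for s in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# toy data: predictions deliberately correlated with the sensitive attribute
rng = np.random.default_rng(1)
s = rng.integers(0, 2, 500)
y = rng.integers(0, 2, 500)
yhat = (rng.random(500) < 0.4 + 0.2 * s).astype(int)
print(demographic_parity_gap(yhat, s))   # close to 0.2 by construction
print(equalized_odds_gap(yhat, y, s))
```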

Assurabilité, vers de nouveaux partages de risque, Congrès des Actuaires

On Thursday, I will participate, remotely, in the 23rd congress of the Institut des Actuaires, in France, with Florence Picard and Laurence Barry.

Our talk is entitled “assurabilité : vers de nouveaux partages de risques ?” (insurability: toward new ways of sharing risk?). I will talk a bit about natural catastrophes… and about drought risk, or rather the “RGA” risk (retrait-gonflement des argiles, the shrinking and swelling of clay soils).


Workshop on Trustworthy AI, in Montreal

This Monday, May 27, 2024, a Workshop on Trustworthy AI will be held in Montreal.

We will be there with Agathe and Olivier, to chat with anyone who might be interested.

Here are our posters. I will talk about discrimination and insurance,

Agathe will explain why calibration of scores is important,

and finally, Olivier will talk about building (causal) graphs for fairness.


Talk in Bordeaux, Journées de Statistique

This week, Sam – Samuel Stocksieker – will be in Bordeaux, at the Journées de Statistique, to talk about the “smoothed bootstrap” and synthetic data generation for modeling extremes (a paper co-written with Denys Pommeret).

In supervised learning, it is quite common to face data with imbalanced distributions. This situation often makes learning difficult for standard algorithms. Research and solutions on learning from imbalanced distributions have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data-augmentation procedure, named DENIS, based on kernel density estimates. This approach provides an expression for the conditional densities of the generators. We apply DENIS to imbalanced regression and propose combining it with a new type of wild-bootstrap generator to simulate the target variable, conditional on the new synthetic data. We evaluate the performance of the DENIS algorithm in imbalanced-regression settings. We empirically evaluate and compare our approach, and demonstrate a significant improvement over existing techniques.
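This is not the DENIS algorithm itself (its conditional-density construction is in the paper); as a rough illustration of the underlying kernel idea, here is a minimal smoothed-bootstrap sketch in Python: each synthetic point is a resampled observation jittered by a Gaussian kernel, i.e. a draw from a kernel density estimate of the sample. The function name, bandwidth rule and toy Pareto data are my assumptions, not the paper's.

```python
import numpy as np

def smoothed_bootstrap(x, n_new, bandwidth=None, rng=None):
    """Smoothed bootstrap: resample observations with replacement and
    jitter them with Gaussian noise, i.e. sample from a Gaussian KDE of x."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
    picks = rng.choice(x, size=n_new, replace=True)
    return picks + rng.normal(0.0, bandwidth, size=n_new)

# toy imbalanced sample: heavy-tailed covariate with few extreme values
rng = np.random.default_rng(2)
x = rng.pareto(3.0, size=200)
# oversample the sparse upper tail by smoothing around the largest points
tail = np.sort(x)[-20:]
synthetic = smoothed_bootstrap(tail, n_new=100, rng=rng)
print(synthetic.min(), synthetic.max())
```

The point of the smoothing step is that, unlike a plain bootstrap, the generator can produce values not present in the original sample, which matters precisely in the sparse regions an imbalanced-regression method cares about.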

Fresh from the oven…

14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters, and 67 litres of beer were needed to produce this adventure…

(Goscinny and Uderzo (1965*), Astérix et Cléopâtre)

Almost better than hot, freshly baked bagels…

the textbook Insurance, Biases, Discrimination and Fairness is now out, and just arrived today! Even though I’ve spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it’s still an immense feeling of pride to open your book for the very first time.

Astérix et Cléopâtre was the last Astérix in the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chair of statistical modeling of risk, and a walking memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée). “When the Pilote collection switched to editions with only the Astérix titles in the menhir, I think the sentence disappeared”… This was the version at my grandparents’ house, the one I (re)devoured every year when I was a kid.

"sendo l'intento mio scrivere cosa utile a chi la intende…"
