Insurability, toward new risk sharing, at the Congrès des Actuaires

On Thursday, I will take part, remotely, in the 23rd congress of the Institut des Actuaires, in France, with Florence Picard and Laurence Barry.

Our talk is entitled “Assurabilité : vers de nouveaux partages de risques ?” (insurability: toward new risk sharing?). I will talk a bit about natural catastrophes… and about drought risk, or rather the “RGA” risk (clay shrinkage-swelling).


Workshop on Trustworthy AI, in Montreal

This Monday, May 27, 2024, a Workshop on Trustworthy AI will be held in Montreal.

We will be there, with Agathe and Olivier, to chat with anyone who might be interested.

Here are our posters. I will talk about discrimination and insurance,

Agathe will explain why calibration of scores is important,

and finally, Olivier will talk about building (causal) graphs for fairness.


Presentation in Bordeaux, at the Journées de Statistique

This week, Sam – Samuel Stocksieker – will be in Bordeaux, at the Journées de Statistique, to talk about the “smoothed bootstrap” and the generation of synthetic data for modeling extremes (paper co-written with Denys Pommeret).

In supervised learning, it is quite common to face data with imbalanced distributions. This situation often makes learning difficult for standard algorithms. Research and solutions for learning from imbalanced distributions have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, named DENIS, based on kernel density estimates. This approach provides an expression for the conditional densities of the generators. We apply DENIS to imbalanced regression and propose to combine it with a new type of wild-bootstrap generator to simulate the target variable, conditionally on the new synthetic data. We evaluate the performance of the DENIS algorithm in imbalanced regression settings. We empirically evaluate and compare our approach, and show a significant improvement over existing techniques.
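As a rough illustration of the underlying idea, here is a minimal smoothed-bootstrap sketch in Python. It is not the DENIS algorithm itself: the kernel-based conditional densities and the wild-bootstrap step of the paper are omitted, and the bandwidth rule is a generic default.

import numpy as np

def smoothed_bootstrap(x, n_new, bandwidth=None, rng=None):
    """Resample the data and add Gaussian kernel noise: new points are
    draws from a kernel density estimate of the sample (smoothed bootstrap).
    This sketches the general idea only, not DENIS itself."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
    resampled = rng.choice(x, size=n_new, replace=True)
    return resampled + rng.normal(0.0, bandwidth, size=n_new)

# e.g., oversample the (rare) upper tail of a skewed sample
x = np.random.default_rng(42).lognormal(size=500)
tail = x[x > np.quantile(x, 0.95)]
synthetic_tail = smoothed_bootstrap(tail, n_new=200)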

Fresh from the oven…

14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters, and 67 litres of beer were needed to complete this adventure…

(Goscinny and Uderzo (1965), Astérix et Cléopâtre)

Almost better than hot, freshly baked bagels…

the textbook Insurance, Biases, Discrimination and Fairness is now out, and it just arrived today! Even though I’ve spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it’s still an immense feeling of pride to open your own book for the very first time.

Astérix et Cléopâtre is the last Astérix of the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chaire de modélisation statistique du risque, and a living memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée). “When the Pilote collection switched to editions with only the Astérix titles in the menhir, I think the sentence disappeared”… That was the version my grandparents had, the one I devoured again, every year, when I was a kid.

What a day…

The Second Workshop on Fairness and Discrimination in Insurance 2024, in Québec City, was a great success, thanks to the amazing speakers (Fei Huang (UNSW Sydney), David Schraub (Chicago Actuarial Association), Emmanuel Hamel (Autorité des marchés financiers), Laurence Barry (Chaire PARI), Agathe Fernandes Machado (UQÀM), Mallika Bender (Casualty Actuarial Society), Christopher Cooney (TD Insurance) and Olivier Côté (Université Laval)), a great audience that stayed in the classroom the entire day, and a lot of coffee!

“Scope and limits of artificial intelligence” at the SCOR Foundation monthly webinar

This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR Foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the foundation (newsletter #1 is online).

Insurance (and further motivations)

Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS.

Interestingly, we have the data to analyse that one. In the original analysis, conditional on not re-offending, the proportions of people wrongly classified as high risk in the two protected groups are significantly different, so the algorithm is racist.

The answer was that actually, conditional on being classified as high risk, the probabilities of re-offense in the two protected groups are not significantly different, so the algorithm is not racist.

So clearly, we can start to see that it will not be so easy: with the same data and the same model, two different conclusions can be obtained.
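To make the two computations concrete, here is a small hedged sketch in Python (with simulated labels and predictions, not the actual COMPAS data): the first criterion compares false positive rates across the two groups, while the second compares re-offense rates among those classified as high risk. The two criteria are genuinely different quantities, so they can lead to opposite verdicts on the same predictions.

import numpy as np

def false_positive_rate(y, y_hat):
    # P(classified high risk | did not re-offend)
    return y_hat[y == 0].mean()

def precision(y, y_hat):
    # P(re-offended | classified high risk)
    return y[y_hat == 1].mean()

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 10_000)                    # protected group
y = rng.binomial(1, np.where(s == 0, 0.4, 0.5))   # re-offense
y_hat = rng.binomial(1, np.clip(0.3 + 0.4 * y + 0.1 * s, 0, 1))  # "high risk" flag

for g in (0, 1):
    print(g, false_positive_rate(y[s == g], y_hat[s == g]),
          precision(y[s == g], y_hat[s == g]))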

We will also discuss legal aspects.

This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.

I can also mention some recent projects in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).

And finally, I should stress that discrimination has little to do with the intention of the statistician. This is the idea of indirect discrimination.

I should also mention “redlining”. About 100 years ago, in the US, we started to see maps created by the HOLC (the Home Owners’ Loan Corporation, based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.

As a side note, we nowadays see some “blue-lining” related to climate risks.

“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.

Finally, I just want to recall that algorithms tend to reproduce what can be observed in data: if there is a difference between men and women, they will reproduce it.

A bit more on insurance

I should also stress an important problem (related to a paper we wrote, in French, a few years ago). Classically, when modeling categorical variables, such as a binary variable y\in\{0,1\}, practitioners just care about predicting the right category. On the left, we have pictures of cats and dogs to train a model, then we try it on a new picture that is either a cat or a dog. Somehow, there is a ground truth, and it is possible to see if we are right or wrong. The same holds if we want to detect a disease on medical images. Now, let us move to the right. In the middle, we have a model that predicts whether it will rain or not. But here, maybe, what we actually care about is the probability of rain. On the right, we have the actuarial problem of modeling claim frequencies. We do not want to predict who will claim a loss; we want a good estimator of the probability of claiming a loss. The challenge, clearly, is that we cannot observe that probability. We cannot observe the latent risk factor; we only observe whether people had an accident or not. But some people with a very small probability can still claim a loss, and very bad drivers can actually be very lucky and have no accident in a given year.

Again, in insurance, we care more about the score, the estimate of the probability, than about the class \widehat{y}. So we can slightly modify standard fairness definitions, to base them not on predicted classes \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.
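For reference, these three definitions are usually called independence, separation and sufficiency; stated for the score, they read

\begin{align*}
\text{independence (demographic parity):}\quad & m(\boldsymbol{X},S) \perp S\\
\text{separation (equalized odds):}\quad & m(\boldsymbol{X},S) \perp S \mid Y\\
\text{sufficiency (calibration):}\quad & Y \perp S \mid m(\boldsymbol{X},S)
\end{align*}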

Quantifying unfairness with optimal transport

Let us start with demographic parity. A weak version is that, on average, scores in the two groups should be identical (or close). An alternative is the strong version, asking for equality in distribution: for any set \mathcal{I}\subset[0,1], the probability that the score is in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.

Mathematically, we need a distance between the distributions of scores in the two groups. A popular one is the Wasserstein distance, which is related to optimal transport.
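Since scores live on the real line, it may help to recall the standard closed form of that distance in the univariate case: if F_0 and F_1 denote the score distributions in the two groups, then

W_p(F_0,F_1) = \left(\int_0^1 \big|F_0^{-1}(u) - F_1^{-1}(u)\big|^p \, du\right)^{1/p},

and the optimal transport map is the monotone rearrangement T = F_1^{-1} \circ F_0.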

The empirical version is perhaps easier to understand: the mapping is based on a matching of individuals.
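A minimal sketch of that empirical version, in Python, assuming for simplicity two groups of equal size, so that the rank-to-rank matching is one-to-one:

import numpy as np

def empirical_matching(scores_a, scores_b):
    """Match the i-th smallest score of group A with the i-th smallest
    score of group B (the one-dimensional optimal transport plan)."""
    order_a = np.argsort(scores_a)
    order_b = np.argsort(scores_b)
    matching = np.empty_like(order_a)
    matching[order_a] = order_b   # individual i in A is matched to matching[i] in B
    return matching

def wasserstein_1(scores_a, scores_b):
    """Empirical 1-Wasserstein distance between two equal-size samples."""
    return np.abs(np.sort(scores_a) - np.sort(scores_b)).mean()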

As a cultural side note, a couple of slides explain why this has to do with “optimal transport”, going back to the problem of Monge (1781). It’s all about transporting the sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…

and then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the pile, etc.

Mitigation

For mitigation (once we have observed that there was discrimination, as discussed previously), heuristically, we want to be somewhere in between the two score distributions of the two subgroups.

Being “in between” can be interpreted locally: for someone in group A, the fair score should be a weighted average (with weights related to the proportions of the two groups) of the prediction obtained as a member of group A, and some sort of counterfactual in the other group, namely the prediction that person would have obtained had she been in group B, at the same probability level.

For the other group, it is the opposite.
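A hedged sketch of that construction, using empirical quantiles (this is the one-dimensional weighted-barycenter repair; ties, smoothing and out-of-sample observations are deliberately ignored):

import numpy as np

def fair_scores(scores, group):
    """Move each group's scores toward the weighted barycenter of the two
    score distributions: p_a * own score + p_b * counterfactual quantile."""
    scores, group = np.asarray(scores, float), np.asarray(group)
    s_a, s_b = scores[group == 0], scores[group == 1]
    p_a, p_b = len(s_a) / len(scores), len(s_b) / len(scores)
    # rank of each score within its own group, as a probability level
    u_a = (np.argsort(np.argsort(s_a)) + 1) / (len(s_a) + 1)
    u_b = (np.argsort(np.argsort(s_b)) + 1) / (len(s_b) + 1)
    out = np.empty_like(scores)
    out[group == 0] = p_a * s_a + p_b * np.quantile(s_b, u_a)
    out[group == 1] = p_b * s_b + p_a * np.quantile(s_a, u_b)
    return out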

Beyond demographic parity

If we get back to our COMPAS example, demographic parity, in the standard classification-based definition, would be translated as \mathbb{P}(\widehat{y}=1\mid s=A)=\mathbb{P}(\widehat{y}=1\mid s=B).

If we get back to the original motivation we gave, it had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.

More generally, if we consider a weak version of the independence criteria, we have equality of moments within each protected subgroup.

Let us say a bit more about calibration. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.

Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
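A minimal sketch of that check in Python (binning the predicted probabilities and comparing, in each bin, the average prediction with the observed frequency of 1’s; reliability diagrams are built exactly this way):

import numpy as np

def calibration_table(y, p_hat, n_bins=10):
    """For each bin of predicted probability, return
    (mean prediction, empirical frequency of 1's, count)."""
    bins = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(p_hat, bins) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rows.append((p_hat[mask].mean(), y[mask].mean(), int(mask.sum())))
    return rows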

To conclude that digression, I can mention the following example, highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures generated by an algorithm, and more precisely, a flow of pictures, morphing from a woman to a man.

Below, we can see the probabilities given by some online application that returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?

Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used to quantify counterfactual unfairness. But I won’t dwell on it here.

Finally, an opening for next year’s agenda: interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular

let the data speak for itself

In insurance, interpretation is very important, probably more important than model assumptions.

Interpretation becomes a key concept when dealing with multiple sensitive attributes.

To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem…

Tomorrow, we will discuss all this further at our workshop, in Québec City.

The Langreney report: countering insurers’ disengagement from climate risk coverage

The report “Adapter le système assurantiel français face à l’évolution des risques climatiques” (adapting the French insurance system to changing climate risks), co-written by Thierry Langreney, Gonéri Le Cozannet and Myriam Mérad, was made public on April 2, 2024. It contains thirty-seven proposals organized into nine objectives, whose aim is to rearrange the responsibilities of the various stakeholders (public authorities, insurers, local authorities and the insured themselves) in the insurance, adaptation, prevention and mitigation of climate risks.

The CatNat regime: a French specificity

The natural catastrophe protection regime (CatNat) was founded in 1982 on the principles of national solidarity (universal coverage and participation, with a premium not indexed on risk) and of everyone’s responsibility (Law no. 82-600 of July 13, 1982, on the compensation of victims of natural catastrophes). That responsibility was to be expressed, in particular, through risk mapping: the Plans d’exposition aux risques (PER, which later became PPR), integrated into land-use plans (plans d’occupation des sols, which became plans locaux d’urbanisme). For reasons of operational convenience, insurers were involved from the start: they collect the premiums (a surcharge on multi-risk home insurance contracts), manage the claims, and can reinsure themselves with the Caisse centrale de réassurance (CCR), the public reinsurer, itself backed by the State guarantee. The CCR thus finances slightly more than 50% of the claims on the French market.

Compared with other national systems, the CatNat regime is a success: it has extended coverage to almost the entire national territory (excluding the overseas departments), for a modest annual premium – around €22 on average in 2023. It is now threatened, directly and indirectly, by global warming.

The insurance component

The report starts from the observation of a sustained worsening of the various perils since 2016, driven by floods and, more recently, by the clay shrinkage-swelling (RGA) risk. As a consequence, the CatNat regime is now unbalanced. To remedy this, a first recommendation is to raise the surcharge from 12% to 20% (an average increase of €15 per year per contract), a recommendation already adopted at the end of the year, which will come into force at the 2025 renewals. Moreover, given that global warming is expected to continue through 2050, the report also recommends an annual 1% indexation of premiums, as well as an indexation of [the rest at Dalloz Actualité…]
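As a quick sanity check on those orders of magnitude (a back-of-the-envelope computation of mine, not the report’s, using the average premium of about €22 mentioned above): scaling the surcharge from 12% to 20% multiplies it by 20/12, so 22 \times 20/12 \approx 36.7 euros, an increase of about 15 euros per year per contract, which matches the figure quoted in the report.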