Drought and subsidence risk

On Thursday, when I arrive in Paris, I will give a talk presenting Predicting drought and subsidence risks in France, published in the special issue Drought vulnerability, risk, and impact assessments: bridging… of NHESS (Nat. Hazards Earth Syst. Sci.), written with Molly James and Hani Ali, as well as our work on floods in France, Insurance against natural catastrophes: balancing actuarial fairness and social solidarity, during a discussion with Marc Bagarry.

The slides are online.

Using “home-made” statistics

Since I am still at the Fields Institute in Toronto, enjoying a workshop on Impacts of Climate Change on Economics, Finance, and Insurance, I wanted to share some experience from this summer. After three years of lockdown because of the covid situation, the family was finally able to travel, and we went to France, so that our kids could see their grandparents (the last visit was a long time ago). And it was hot, very hot. While I was chatting with my dad about the weather, he told me that he had a lot of connected devices in the house, including temperature sensors. One of the devices was in a place where nothing really changed over time, so I thought it could be sufficient to get robust data. My goal was to see how the popular IPCC graph looked on real data.

When I got the data, I plotted them, and compared the distribution back in 2012 with the one in 2022 (or, to be honest, half 2021 and half 2022). As in the IPCC graph, I assume a Gaussian distribution.
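A minimal sketch of that comparison (the data frame df and its year and temp columns are my assumptions, not the actual names in the data):

m1 = mean(df$temp[df$year == 2012])
s1 = sd(df$temp[df$year == 2012])
m2 = mean(df$temp[df$year >= 2021])
s2 = sd(df$temp[df$year >= 2021])
u = seq(-5, 35, by = .1)
plot(u, dnorm(u, m2, s2), type = "l", col = "red",
     xlab = "temperature (°C)", ylab = "density")   # 2021-2022 fitted Gaussian
lines(u, dnorm(u, m1, s1), col = "blue")            # 2012 fitted Gaussian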

As expected, there is a clear shift to the right (that is “climate change”). But the scariest part was actually the linear trend.
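It comes from a simple linear regression of the temperature against time; a minimal sketch of the call (the predictor x is the date converted to a fractional year, as suggested by the output below; the response name y is my assumption):

summary(lm(y ~ x))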

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -637.30455   80.44650  -7.922 3.01e-15 ***
x              0.32273    0.03988   8.092 7.72e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

with a slope of 0.322, meaning that the average temperature increased by 0.322 degrees per year! That is more than 3°C over the past ten years! Let me write it again: in a house, +3°C on average over the past ten years.

I thought there were some issues with the data, so I tried to collect some official data. Since there were no official records in their village, I used the data from Lyon (which is 80 kilometers from their house).

The shift to the right is confirmed here, but unfortunately, I could not get data after 2020. Now

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -567.27953   96.26577  -5.893 4.17e-09 ***
x              0.28803    0.04776   6.031 1.81e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And here again, I have a slope close to 0.3. So again, on the mainland, about +3°C over the past 10 years was observed. You might not find that scary, but I do think it is scary!

Statistique bayésienne, data sciences et nouveaux risques

At the start of this new academic year, the Institut des Actuaires is launching a series of talks on the theme Statistique bayésienne, data sciences et nouveaux risques (Bayesian statistics, data science and new risks). Since they did me the pleasure and the honour of opening this series, I will give an introductory talk on Thursday, September 22, at the end of the afternoon. My slides are online (I advise against printing them, since I included animations that span several slides; I would rather recommend this version).

Deutsche Gesellschaft für Versicherungs- und Finanzmathematik

Next week, the Convention-A conference will take place, with more than 1000 participants and 200 sessions, and I was invited to give a talk in a Machine Learning session organized by the Deutsche Gesellschaft für Versicherungs- und Finanzmathematik, on Tuesday. It will be based on the revised version of our paper A fair pricing model via adversarial learning. Slides are now available online.

Workshop on Impacts of Climate Change on Economics, Finance, and Insurance

Next week, I will be at the Fields Institute in Toronto, for a workshop on Impacts of Climate Change on Economics, Finance, and Insurance. The slides of my talks are now online. I will briefly come back to three papers about insurance of natural catastrophes, starting with Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, written with Molly James and Laurence Barry, then Predicting Drought and Subsidence Risks in France, written with Molly James and Hani Ali, and finally a more recent paper, Government Intervention in Catastrophe Insurance Markets: A Reinforcement Learning Approach, written with Menna Hassan and Nourhan Sakr.

Collaborative Insurance Sustainability and Network Structure

A second version of Collaborative Insurance Sustainability and Network Structure is now available on arXiv,

The peer-to-peer (P2P) economy has been growing with the advent of the Internet, with well-known brands such as Uber or Airbnb being examples thereof. In the insurance sector the approach is still in its infancy, but some companies have started to explore P2P-based collaborative insurance products (e.g. Lemonade in the U.S. or Inspeer in France). The actuarial literature only recently started to consider those risk sharing mechanisms, as in Denuit and Robert (2021) or Feng et al. (2021). In this paper, we describe and analyse such a P2P product, with some reciprocal risk sharing contracts. Here, we consider the case where policyholders still have an insurance contract, but the first self-insurance layer, below the deductible, can be shared with friends. We study the impact of the shape of the network (through the distribution of degrees) on the risk reduction. We also consider some optimal setting of the reciprocal commitments, and discuss the introduction of contracts with friends of friends, to mitigate some possible drawbacks of having people without enough connections to exchange risks.

Biases, discrimination and fairness in insurance

On Variances, a short article presenting the report delivered at the beginning of the summer to the Institut Louis Bachelier, Assurance : Discrimination, biais et équité (Insurance: Discrimination, Biases & Fairness).

Massive data, and the performance achieved by machine learning algorithms, have disrupted insurance and actuarial science. The questions raised by these new tools in other contexts (whether predictive justice (or “actuarial” justice, as Harcourt (2008) calls it), the debates on fake news, autonomous vehicles or predictive medicine) push actuaries towards doubt and mistrust. Kranzberg (1986) asserted that “technology is neither good nor bad; nor is it neutral”, stressing that, even without bad intentions, learning algorithms could be unfair. And correcting these possible injustices is not simple. For Nielsen (2020), “technology does not necessarily self-regulate, via either market or social pressures” (the invisible hand of markets or social pressure may not be enough). It is in this context that we come back here to the issues of bias, discrimination and fairness in the predictive models used in insurance. These changes, in both the data and the models, observed over the past ten years or so, had already questioned the very existence of insurance (to be continued).

Talk on fairness in insurance, for the DIALog chair and CNP Assurances

On Friday morning (just before my class, the first one of the term), I will give a talk on “Insurance, biases, discrimination and fairness”, online (a web-coffee conference), for the start of the year of the DIALog chair (digital insurance and long term risk). The slides are online. The talk will be largely based on the report Assurance, biais, discrimination et équité (Insurance: Discrimination, Biases & Fairness; the presentation will be in English).


Monty Hall problem, with Thompson sampling

We all know the Monty Hall problem. Recently, Jason Rosenhouse published a book on that topic (entitled The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser). The game is more or less described by the following question:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

While I was preparing some slides for a lecture on Bayesian modeling and thinking, I wanted to find an illustration of what is sometimes called the Bayesian brain, related to the updating of beliefs as we gain experience, and I was looking for examples of Thompson sampling. And actually, it is possible to learn that switching is the optimal strategy in the Monty Hall problem, just by playing the game sequentially, and learning from previous plays. The following code is used to choose the door with the prize (the car), and the one we first select:

set.seed(1)
n = 5000
listdoor = matrix(1:3, 3, n)                   # one column (the three doors) per game
door = listdoor
win = sample(1:3, size = n, replace = TRUE)    # door hiding the car
pick1 = sample(1:3, size = n, replace = TRUE)  # door we first select

Then, the presenter picks a door that is neither the one with the car, nor the one we chose initially. The following trick can be used to get the list of available choices (door[i + (j-1)*3] is the entry in row i and column j of the matrix):

door[win + (0:(n-1))*3] = NA    # remove the winning door
door[pick1 + (0:(n-1))*3] = NA  # remove the door we picked
door[,1:10]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]   NA   NA   NA    1   NA   NA    1   NA    1    NA
[2,]    2    2   NA   NA    2    2    2   NA   NA     2
[3,]    3   NA    3    3   NA   NA   NA    3   NA    NA

Then, the presenter picks one of the remaining doors:

presenter = apply(door, 2, function(x) sample(x[!is.na(x)], size = 1))
# when win != pick1, only one door remains, and sample(x, size = 1) with a
# single integer x draws from 1:x, so we take the remaining door directly
presenter[win != pick1] = apply(door, 2, function(x) x[!is.na(x)])[win != pick1]
presenter = unlist(presenter)
presenter[1:10]
[1] 3 2 3 1 2 2 2 3 1 2

Now, let us get back to the Monty Hall problem. We have two possible strategies. The first one is to keep the door we chose initially:

pick2a = pick1             # strategy 1: stay with the initial choice
gaina = (pick2a == win)
mean(gaina)
[1] 0.3392

As expected, on average, we win about 1 time out of 3. The second strategy is to (always) pick the other door (the one left). The code is close to the one we used before:

door = listdoor
door[pick1 + (0:(n-1))*3] = NA      # remove the door we picked
door[presenter + (0:(n-1))*3] = NA  # remove the door the presenter opened
pick2b = apply(door, 2, function(x) x[!is.na(x)])  # strategy 2: switch
gainb = (pick2b == win)
mean(gainb)
[1] 0.6608

If you know the Monty Hall problem, you know that the probability of winning is now 2 chances out of 3 (which is what the maths tells us), and that is what we get from the simulations.

Now, what if we don’t know how to do the maths, and we don’t want to compute the probabilities? We can use Thompson sampling to explore, and exploit. In a general context, we have to choose among K alternatives (here K=2, since we can either keep our initial choice, or pick the other one), and the output is \boldsymbol{X}=(X_1,\cdots, X_K), where X_k\sim\mathcal{B}(\theta_k), but \theta_k is unknown, and we will play the game, and learn. From the previous computations, we know that \theta_1=1/3 while \theta_2=2/3.

We use some prior distribution, \theta_k\sim\mathcal{B}eta(\alpha_k,\beta_k), since the Beta distribution is the conjugate prior of the Bernoulli. At time t, we draw K (independent) Beta variables B_k\sim\mathcal{B}eta(\alpha_k,\beta_k), pick k^\star = \displaystyle{\underset{k=1,\cdots,K}{\text{argmax}}\{B_k\}}, play that strategy, and update the parameters of arm k^\star (adding the observed gain X_{k^\star} to \alpha_{k^\star}, and 1-X_{k^\star} to \beta_{k^\star}). Here the code will be

set.seed(2)
X = cbind(pick2a == win, pick2b == win)*1   # gain of each strategy, for each game
AB1 = AB2 = tirage = matrix(NA, n, 2)       # Beta parameters (and draws) of the two arms
choix = rep(NA, n)                          # strategy selected at each round
k = 1
AB1[k,] = AB2[k,] = c(1,1)                  # uniform priors
for(k in 1:(n-1)){
  tirage[k,] = c(rbeta(1, AB1[k,1], AB1[k,2]),
                 rbeta(1, AB2[k,1], AB2[k,2]))
  choix[k] = which.max(tirage[k,])          # play the arm with the largest draw
  if(choix[k] == 1){                        # update the posterior of arm 1
    AB1[k+1,] = AB1[k,] + c(X[k,1], 1-X[k,1])
    AB2[k+1,] = AB2[k,]
  }
  if(choix[k] == 2){                        # update the posterior of arm 2
    AB1[k+1,] = AB1[k,]
    AB2[k+1,] = AB2[k,] + c(X[k,2], 1-X[k,2])
  }
}

Before showing some graphs, let us check that, indeed, we select the second strategy (here, selecting the other door) more often:

AB1[n,]
[1] 5 13
AB2[n,]
[1] 3292 1693

Indeed, since the mean of a Beta distribution \mathcal{B}eta(\alpha,\beta) is \alpha/(\alpha+\beta),

AB2[n,1]/(sum(AB2[n,]))
[1] 0.6603811

i.e. the probability of winning with this second strategy is about 2/3 (as obtained previously). We can visualize this on the animation below, with the first strategy (keep your initial choice) in red and the second one (select the other door) in green, with 1 if we win, and 0 otherwise. Then we can visualize the evolution of \alpha_2 and \beta_2 on top, and of \alpha_1 and \beta_1 below (the index is the time t). Finally, we have the two drawn variables B_1 and B_2,
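A static version of that visualization can be sketched from the objects created in the loop above (this is only a sketch, plotting the evolution of the posterior means of the two arms, not the full animation):

plot(1:n, AB2[,1]/(AB2[,1] + AB2[,2]), type = "l", col = "green",
     ylim = c(0,1), xlab = "t", ylab = "posterior mean")  # switching strategy
lines(1:n, AB1[,1]/(AB1[,1] + AB1[,2]), col = "red")      # keeping strategy
abline(h = c(1/3, 2/3), lty = 2)   # true probabilities of winning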

Of course, another simulation would have given different B_1’s and B_2’s, but in the end, we learn that the second strategy is better, and we learn it quite fast…

Here is another one (just to confirm)

So clearly, even if we don’t know which strategy is optimal (keep our initial choice, or switch), a player who has played the game about 30 times should be able to figure out that switching is the better strategy.
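We can check that on the simulation above, counting how often each arm was selected within the first 30 rounds (choix is the vector created in the loop above):

table(choix[1:30])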

Montréal AI Symposium 2022

In about ten days (on Saturday afternoon), I will be presenting a poster on fairness, discrimination and insurance at the Montréal AI Symposium, based on our joint paper The Fairness of Machine Learning in Insurance: New Rags for an Old Man?, written with Laurence Barry. Since the paper was quite literary, I used material from the document Insurance: Discrimination, Biases & Fairness to get a more visual poster. Additional information will come while discussing…

Here is the poster presented at the conference.