This Wednesday, I will be giving a talk at the ASTIN-IAA webinar.
Slides are available online.
Published this week: a nice report entitled Algorithmes: garder le contrôle, in l'Actuariel.
Looking at data from the past 40 years, the frequency of weather and climate catastrophes keeps increasing worldwide. So do insured losses, in large part because of the development of insurance itself. Economic growth, increasing wealth, the industrialisation of vulnerable areas and the concentration of populations explain a large part of this increase, as noted by Botzen et al. (2010).
Figure 1: Number of weather and climate catastrophes worldwide, based on Munich Re (2019) data[i].
Figure 2: Losses caused by weather and climate catastrophes worldwide, based on Munich Re (2019) data.
The links between this increase in the frequency and severity of catastrophes (ranging from heat waves to cold spells, from droughts to torrential rains, as well as storms and hurricanes) and climate change are still poorly understood. As IPCC (2013) noted, small changes in the average temperature distribution can have large impacts on the high quantiles. Over the past 10 years, the average cost of natural hazards in France has been close to 3 billion euros per year, for insurers alone. And at COP21, French insurers estimated that this cost could double over the coming years (in constant euros), half of this increase being explained by socio-economic factors (mostly the increase in sums insured, but also migration towards risky areas), and half by factors related to climate change.
But beyond these catastrophes, climate change affects every line of insurance business. The increase in precipitation in Europe (not only on average, but also with a rise in flash floods) will affect underground infrastructure and facilities located close to rivers. Coastal properties are threatened by rising sea levels. Flood-related risks form a category of catastrophe strongly affected by climate change, in particular coastal flooding and floods caused by overflowing rivers and runoff. Another risk is drought, which also damages buildings through soil subsidence. The heat wave of summer 2003 in France caused an increase in construction insurance claims of about 20%. In life insurance, the 2003 heat wave, exceptional in its duration, also showed the potential impact of climate change on the most vulnerable people (very young children, the elderly, and the chronically ill). Agricultural insurance is also affected by droughts and floods. Droughts also increase the risk of fire, which can destroy forests and crops.
Insurers have tried to offer coverage whenever possible. These solutions often involve insurance companies. In some countries, insurers do not intervene, and the State bears the losses (out of its budget or from a dedicated fund financed by a tax on insurance contracts). In France, there is a mixed mechanism, the so-called Cat Nat scheme, based on a combination of compulsory insurance and public intervention. This subtle balance between private insurers and the State evolves over time, as all parties fear the financial consequences of catastrophes. In particular, the State, which had offered its guarantee (for instance as "insurer of last resort" in the French mechanism), is now required to record this commitment in its budget.
Insurers are all the more involved since prevention measures, which must be taken collectively, can have a very large impact on the risk, at a negligible cost compared with the gains. In 2004, the World Bank calculated that, for all the natural catastrophes that occurred in the 1990s, 40 billion euros invested in prevention measures would have reduced the total cost of 280 billion euros. The same order of magnitude appears when the Association of British Insurers stated that every pound sterling spent on prevention measures would save 6 pounds sterling in repair costs after floods.
Botzen, W. J. W., van den Bergh, J. C. J. M., and Bouwer, L. M. (2010). Climate change and increased risk for the insurance sector: a global perspective and an assessment for the Netherlands. Natural Hazards, 52(3), 577–598.
[i] Munich Re NatCatSERVICE
This morning, I will present our joint paper with Michel Denuit and Julien Trufin, Autocalibration for Insurance Pricing with Machine Learning. The code is online on GitHub, as well as the slides.
We recently started a joint research initiative, funded by the AXA Research Fund, to work on unusual data for insurance.
Insurers sometimes lack information at the time of claim submission, such as the structure of a building, the presence of health risks common to a group of people, or the spatial diffusion of a pandemic. Unusual data, such as satellite images, personal network connections, and tweets can be used to populate this information gap. In this joint research initiative, we will use images, network data, and texts for risk analysis from an actuarial perspective. Specifically, the project will explore how using unusual data can contribute to smoother claims assessment and reduced data quality risk, allowing for better risk selection and pricing. The project will look at three types of unusual data: pictures/satellite images, network data, and text data.
More information will be shared via a dedicated website (https://jridata.github.io/), though I will also mention interesting papers, conferences and open-source code on this blog…
Would you like to put your data science skills to the test?
Imperial College London, Université du Québec à Montréal (UQAM), actuarial institutes in Singapore, the UK (including the IFoA) and Australia, ASTIN, and the Casualty Actuarial Society are co-organising a global data science competition.
Would you like to accurately predict the cost of insurance by putting your data science skills to the test? We are hosting two competitions with separate datasets: a loss prediction competition on Kaggle with synthetic workers' compensation data, and a pricing competition in a simulated market hosted on AI Crowd with real-world motor insurance contracts. Code can be written in either R or Python. The competition is sponsored by a number of different organisations, with a total of US$12,000 in cash prizes to be won. For more information about how to take part, please visit www.pricing-game.com
Just a brief post to mention that I was invited as a keynote speaker to give a talk at the Summer School on Machine Learning for Economists, in one month. I will give a talk on machine learning and insurance. More to come, soon…
Today and tomorrow, I will attend the Online International Conference in Actuarial science, data science and finance, organised by colleagues in Lyon. But I won’t be in Lyon, I will be at home, in Montréal…
I will give a talk on Wednesday afternoon, on a joint paper with Ewen Gallic and Olivier Cabrignac. Slides are available here, and if I can get a copy of the video, I will share it…
Let us get back to the Titanic dataset,
loc_fichier = "http://freakonometrics.free.fr/titanic.RData"
download.file(loc_fichier, "titanic.RData")
load("titanic.RData")
base = base[!is.na(base$Age),]
We consider two variables, the age x (the continuous one) and the survival indicator y (the qualitative one),
X = base$Age
Y = base$Survived
It looks like the age might be a valid explanatory variable in the logistic regression,
summary(glm(Survived~Age,data=base,family=binomial))

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.05672    0.17358  -0.327   0.7438
Age         -0.01096    0.00533  -2.057   0.0397 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 960.23  on 712  degrees of freedom
AIC: 964.23
The significance test here has a p-value just below 4%. Actually, one can relate it to the value of the deviances (the null deviance and the residual deviance). Recall that D=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\widehat{\boldsymbol{\mu}})\big) while D_0=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\overline{y})\big). Under the assumption that x is worthless, D_0-D tends to a \chi^2 distribution with 1 degree of freedom. And we can compute the p-value of that likelihood ratio test,
1-pchisq(964.52-960.23,1)
[1] 0.03833717
(which is consistent with a Gaussian test). But if we consider a nonlinear transformation
library(splines)   # needed for bs()
summary(glm(Survived~bs(Age),data=base,family=binomial))

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.8648     0.3460   2.500 0.012433 *
bs(Age)1     -3.6772     1.0458  -3.516 0.000438 ***
bs(Age)2      1.7430     1.1068   1.575 0.115299
bs(Age)3     -3.9251     1.4544  -2.699 0.006961 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 948.69  on 710  degrees of freedom
which seems to be “more significant”
1-pchisq(964.52-948.69,3)
[1] 0.001228712
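As a side note, the same likelihood-ratio p-values can be obtained directly from the fitted models with anova(), without copying the deviances by hand (a minimal sketch, reusing the base data loaded above),

reg0 = glm(Survived~1,       data=base, family=binomial)   # null model
reg1 = glm(Survived~Age,     data=base, family=binomial)   # linear effect of the age
reg2 = glm(Survived~bs(Age), data=base, family=binomial)   # spline transformation
anova(reg0, reg1, test="Chisq")   # same test as 1-pchisq(964.52-960.23,1)
anova(reg0, reg2, test="Chisq")   # same test as 1-pchisq(964.52-948.69,3)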
So it looks like the variable x is interesting here.
To visualize the non-null correlation, one can consider the conditional distribution of x given y=1, and compare it with the conditional distribution of x given y=0,
ks.test(X[Y==0],X[Y==1])

	Two-sample Kolmogorov-Smirnov test

data:  X[Y == 0] and X[Y == 1]
D = 0.088777, p-value = 0.1324
alternative hypothesis: two-sided
i.e. with a p-value above 10%, the two distributions are not significantly different.
F0 = function(x) mean(X[Y==0]<=x)
F1 = function(x) mean(X[Y==1]<=x)
vx = seq(0,80,by=.1)
vy0 = Vectorize(F0)(vx)
vy1 = Vectorize(F1)(vx)
plot(vx,vy0,col="red",type="s")
lines(vx,vy1,col="blue",type="s")
(we can also look at the densities, but it looks like there is not much to see)
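For completeness, a minimal sketch of that density comparison (same X and Y as above),

plot(density(X[Y==0]), col="red", main="", xlab="Age")   # age density among non-survivors
lines(density(X[Y==1]), col="blue")                      # age density among survivors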
An alternative is to discretize the variable x and to use Pearson's independence test,
k = 5
LV = quantile(X,(0:k)/k)
LV[1] = 0
Xc = cut(X,LV)
table(Xc,Y)
            Y
Xc            0  1
  (0,19]     85 79
  (19,25]    92 45
  (25,31.8]  77 50
  (31.8,41]  81 63
  (41,80]    89 53
chisq.test(table(Xc,Y))

	Pearson's Chi-squared test

data:  table(Xc, Y)
X-squared = 8.6155, df = 4, p-value = 0.07146
The p-value is here 7%, with five categories for the age. And actually, we can compute that p-value for various numbers of categories,
pvalue = function(k=5){
  LV = quantile(X,(0:k)/k)
  LV[1] = 0
  Xc = cut(X,LV)
  chisq.test(table(Xc,Y))$p.value}
vk = 2:20
vp = Vectorize(pvalue)(vk)
plot(vk,vp,type="l")
abline(h=.05,col="red",lty=2)
which gives a p-value close to 5%, as soon as we have enough categories. In the slides of the course (STT5100), I claim that the age is actually an important variable when trying to predict whether a passenger survived. The tests mentioned here are not as conclusive, nevertheless…
I was recently asked to write a (short) article on the role of actuaries in insurance, for a forthcoming (law) book. Let me post a few ideas here (any comments are welcome…)
Two years ago, through a superb exhibition, France celebrated Louis Pasteur, one of the great pioneers of vaccination, whose 200th birthday will be celebrated in a few months.
At the same time, France has become the country where scepticism about the effectiveness of vaccination is the highest. And this lack of trust in vaccines seems to run much deeper, reflecting a doubt about science, and about public authorities.
Continue reading Les autorités publiques face aux risques, de la confiance au doute
Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a “good” model, and they are always surprised to get a low AUC when trying to model the probability to claim a loss, to die, to fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect, because there is a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs to see what’s going on. And because I don’t want to waste time fitting models, we will assume each time that we have a perfect model. So I want to show that the upper bound of the AUC is actually quite low! So it’s not a modeling issue, it is a fundamental issue in insurance!
By ‘perfect model’ I mean the following: \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega is unobservable! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc.). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing y_i‘s and \widehat{y}_i‘s where \widehat{y}_i=1 when \mathbb{P}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models. I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold! The point is that it’s possible to claim a loss (y=1) even if the probability is 3% (and most of the time \widehat{y}=0), and to not claim one (y=0) even if the probability is 97% (and most of the time \widehat{y}=1). That’s the idea with randomness, right?
So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,
In both cases, there is, on average, a 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval: 90% of the individuals have a probability to claim a loss in that interval (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it’s more like 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.
Consider now some dataset, with Bernoulli variables y, drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model: I do not estimate a model based on some covariates; here, I assume that I know the probability perfectly (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, here I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above)
a = m*(m*(1-m)/v-1)
b = (1-m)*(m*(1-m)/v-1)
p = rbeta(n,a,b)
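To make the snippet above self-contained, here is a minimal sketch with illustrative values (the choices of n, m and v below are assumptions, picked only to mimic the “25% on average, with some heterogeneity” situation),

n = 1000     # portfolio size (illustrative)
m = .25      # average probability to claim a loss
v = .01      # variance of the individual probabilities (illustrative)
a = m*(m*(1-m)/v-1)
b = (1-m)*(m*(1-m)/v-1)
p = rbeta(n,a,b)
c(mean(p), sd(p))   # should be close to m and sqrt(v)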
From those probabilities, I generate occurrences of claims, or deaths,
Y = rbinom(n,size = 1,prob = p)
Then, I compute the AUC of my “perfect” model,
auc.tmp = performance(prediction(p,Y),"auc")   # from the ROCR package (loaded below)
And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code
library(ROCR)
n  = 1000
ns = 200
ab_beta = function(m,inter){
  a = uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
              interval=c(.0000001,1000000))$root
  b = a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter = function(m=.5,i=.05){
  V_auc = rep(NA,ns)
  b = -1
  essai = try(ab<-ab_beta(m,i),TRUE)
  if(inherits(essai,what="try-error")) a = -1
  if(!inherits(essai,what="try-error")){
    a = ab[1]
    b = ab[2]
  }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p = rbeta(n,a,b)
      Y = rbinom(n,size = 1,prob = p)
      auc.tmp = performance(prediction(p,Y),"auc")
      V_auc[s] = as.numeric(auc.tmp@y.values)}
    L = list(moy_beta = m,
             disp_beta = i,   # dispersion (q95-q05) used to parametrize the Beta
             q05 = qbeta(.05,a,b),
             q95 = qbeta(.95,a,b),
             moy_AUC = mean(V_auc),
             sd_AUC = sd(V_auc),
             q05_AUC = quantile(V_auc,.05),
             q95_AUC = quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm = seq(.025,.975,by=.025)
Vi = seq(.01,.5,by=.01)
V = outer(X = Vm, Y = Vi, Vectorize(function(x,y) Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V, xlab="Probability (Average)", ylab="Dispersion (Q95-Q5)",
      col = colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)
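For instance, a single cell of that figure can be reproduced with a call such as the following (the values are chosen to match the example discussed just below, a 30% average probability and a 20% dispersion),

set.seed(1)
Sim_AUC_mean_inter(m=.3, i=.2)$moy_AUC   # average AUC of the 'perfect' model, close to 60%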
On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion: the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and 20% dispersion (meaning that in the portfolio, 90% of the insured have between 20% and 40% chance to claim a loss, or between 15% and 35%), we have on average a 60% AUC. With a perfect model! So with only a few covariates, having 55% should be great!
My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured have between a 3% and a 20% chance to claim a loss! That’s less than 20% dispersion! And in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65%!
Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.
As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That’s the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that’s not correct. Here, with a factor n, we would have the variance of the total loss for the insurance company. We just removed the n factor in the table.
That’s what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that’s basically Pythagoras’ theorem).
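The decomposition in question is simply the law of total variance: writing \Omega for the risk factor used for the pricing, \text{Var}[S]=\mathbb{E}\big[\text{Var}[S|\Omega]\big]+\text{Var}\big[\mathbb{E}[S|\Omega]\big], where the first term is the variance that remains within each risk class (what the insurer retains once the premium \mathbb{E}[S|\Omega] is charged), and the second one is the variance of the premiums across policyholders.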
The interpretation is the following:
And then, I usually mention the third and last case, more realistic
And here also, there is a nice interpretation, because of the variance decomposition: there is one part that we observed previously, with some ‘perfect pricing’, and an additional part (that is positive) that is related to the fact that the covariates are just a proxy of the risk factor…
The term on the left is then a lower bound, obtained if, using the covariates available for the pricing, we can actually recover the risk factor.
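To make that explicit (a minimal derivation, under the assumption that the loss S depends on the covariates \boldsymbol{X} only through the risk factor \Omega, i.e. the covariates are merely a proxy of it), conditioning the law of total variance on \boldsymbol{X} and taking expectations gives \mathbb{E}\big[\text{Var}[S|\boldsymbol{X}]\big]=\mathbb{E}\big[\text{Var}[S|\Omega]\big]+\mathbb{E}\big[\text{Var}[\mathbb{E}[S|\Omega]\,|\,\boldsymbol{X}]\big]\geq\mathbb{E}\big[\text{Var}[S|\Omega]\big], with equality when \mathbb{E}[S|\Omega] is a function of \boldsymbol{X}, that is, when the covariates recover the risk factor.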
That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases
Consider some simple simulations. On the graph on the left, we have on the x-axis the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses. Here, it’s 10. That’s the plain red line (on the left). In the middle, the y-axis is the insured profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that’s the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it’s 0, with some variance,
Consider now a simple covariate x: assume that we’ve been able to create a binary variable that can distinguish the low risks and the high risks. Here, there are two levels for the premium. The low premium is close to 6, and the high one is close to 14. That’s again the graph on the left.
Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller: it is 6 (while it was 10 before). When the loss was close to 10, previously it meant a 0 profit, but now it’s either a loss of 4, or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, less variance. That’s the decrease of variance we discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That’s what we observe on the right.
Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.
As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that’s a lower bound for the variance of the result of the insurance company… I hope that the graphs clarify what’s going on here…
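To illustrate numerically, here is a minimal simulation sketch of those three cases; the distributional choices (a Beta risk factor, losses uniform given the risk factor, with a pure premium of 10 as in the graphs) are assumptions made only to mimic the figures,

set.seed(123)
n     = 1e5
omega = rbeta(n, 2, 2)            # unobservable risk factor
S     = runif(n, 0, 40*omega)     # individual loss: E[S|omega]=20*omega, E[S]=10
x     = (omega > .5)*1            # binary covariate separating low and high risks
pl1 = mean(S)   - S               # case 1: single premium E[S]
pl2 = ave(S, x) - S               # case 2: two premiums, the empirical E[S|x] per group
pl3 = 20*omega  - S               # case 3: 'perfect' pricing with E[S|omega]
c(var(pl1), var(pl2), var(pl3))   # decreasing variances, the last one being the lower bound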
In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention, and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook a fish for friends who have come to eat, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to make a more judicious, even optimal decision. Without picking just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.
“To generalize is to be an idiot, to particularize is the alone distinction of merit” wrote William Blake around 1800, annotating speeches by the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society: a moral punishment, but sometimes also a legal one (when hiring, for example), in a society that tends to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he is attached. But isn’t that what the actuary does every day?
For Schauer (2009), this “generalization“, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial“. If I decide to insure a sports car, I am given risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, according to Schauer’s definition. We all do it, much more often than most of us would probably recognize. We do this when we choose airlines based on their safety record, punctuality or lost luggage. We do this when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless on the basis of nationality, for example by saying that French people are rude, or that Scots all wear kilts, while at the same time acknowledging that more pernicious stereotypes on the basis of ethnic origin, gender or sexual orientation are far too widespread today! As the misconception of the word “prejudice” indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if group allocations have a solid statistical basis. The big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I can thus decide not to travel with a given airline anymore because, on three trips, I have had two bad experiences. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!
In fact, the question is often whether an insurance company’s refusal to provide coverage, or to increase the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Leemens (2000) noted, the question was asked of the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers were no longer able to differentiate between insurance product prices according to whether the member was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction time (and increased risk of sudden disability), etc. And although there are many individual variations, the available data provide important empirical justification.
A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models relies on a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in the “Ladder of Causation“, there are three levels. At the first level, we find the notion of association (or correlation), or even conditional probability, which serves as a basis for the constitution of stereotypes: if we observe
\mathbb{P}[\text{caries}\mid\text{brushing your teeth}] < \mathbb{P}[\text{caries}\mid\text{not brushing your teeth}]
then brushing teeth will be associated with a decrease in the probability of having caries. It is also the basis for regression methods, which rely on correlations between the variable of interest and others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the graph on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype doesn’t necessarily focus on the causal link: if I see a lot of cyclists passing by the window, I’ll tell myself it must be hot, or at least warm.
Figure 1: Näytä Data – Author’s visualization
The first level answers the question “what if I see…?” (e.g. “what cycling traffic should we expect if the temperature reaches 20°C?”), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?”. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing your teeth is simply more natural for children with good teeth? This is where the third level of the ladder comes in, asking the question “what if I had done…?” and relying on the idea of a counterfactual model. We are no longer content to measure correlations; we build a model explaining what would happen if we made a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments in a way that a machine cannot (yet). And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long wondered how to explain the fact that small babies of smoking mothers have a higher probability of survival than small babies of non-smoking mothers. The intuition that something is wrong comes from prejudices, from stereotypes that we have, and that a machine cannot have.
As Antonio & Charpentier (2017) noted, the European “gender directive” has confused many insurers who used gender to construct their rates, as the latter was highly correlated with the frequency of claims. But once telematic data were introduced, gender was no longer significant in the regression. Gender had long been used as a proxy to capture an effect that can now be observed directly with telematic data, giving rise along the way to many sexist and other stereotypes.
But the stories also make it possible to decide between a false correlation (“spurious correlation“) and a correlation that could be interpreted. In Figure 2, we have life expectancy at birth, a variable that we could try to explain in a pension study context, for example, by French department. On the right, two variables taken at random: the number of licenses of a tennis club, and the number of advertising agencies. Stereotypes are what will allow us to construct a causal graph, allowing us to understand why there is such a strong correlation between these variables and life expectancy.
Figure 2: Life expectancy at birth for men, left. At the centre, number of tennis licenses per 100,000 inhabitants (source FFT). On the right, number of advertising agencies per 100,000 inhabitants (source INSEE, code NAF 7311Z). Visualization of the author.
While William Blake condemned stereotypes by saying “to generalize is to be an idiot“, he also clearly went further, continuing with “to particularize is the alone distinction of merit“. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others“. And morality seems to prove them right, against actuaries. Yet, not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can now be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…
Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ?. Risques. 110 : 107-110.
Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.
Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica. 42: 65–98
Kahneman, D. (2011). Thinking, Fast and Slow. FSG Eds.
Leemens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill law journal. Revue de droit de McGill 45(2):347-412.
Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.
The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who tried to find a walk – starting from a given point – that would bring us back to that point by passing once and only once through each of the seven bridges of the city of Königsberg. These networks can be compared to metro networks, made up of stations (nodes) linked two by two by rails, or not, or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to create relatively homogeneous communities that accept to share a risk, recreating a mutualisation.
In genealogy, we will have hierarchical networks, a child being linked to his parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyze the links between individuals (or organizations) within a group. Friendships can be studied in a schoolyard (a link that could be an invitation to a birthday party) or e-mail exchanges in a company (the Enron e-mail database has been widely used, with over 180,000 messages exchanged between 36,000 employees). Figure 1 shows two networks of 20 individuals (A, B, …, T).
Figure 1: Random networks, 20 nodes (Watts-Strogatz and Barabási)
In a Facebook or Linkedin type vision, we will say that E and F are linked, in the sense of “friends”, if there is a segment linking points E and F. A network can be directed, for example if we study the exchange of messages (E wrote to F), or money loans (E lent money to F). If historically only adjacency was studied (existence or not of links), we can now add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes are then the banks). The study of networks within village communities in developing countries has led to a better understanding of informal finance mechanisms. Banerjee et al (2013) study the dissemination of information in a network, and more particularly microfinance loans.
While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit organizations to use a borrower’s social network to determine whether or not he or she represents a good credit risk. In particular, if friends’ credit scores were too low, a person could be denied credit. This situation is dangerous because of the particular properties of networks, and more specifically the friendship paradox.
In 1929, Frigyes Karinthy hypothesized that any person on earth could be connected to any other person through a chain of individual relationships involving at most 6 links. “We should select anyone from the world’s 1.5 billion people, anyone, anywhere. It seems that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the chosen individual using nothing other than the network of personal acquaintances.” This theory of six handshakes originated in a literary short story. It was necessary to wait for the work of Michael Gurevich in the 1960s, then Stanley Milgram ten years later, to see the first attempts to quantify these relationships, under the name “Small World Problem”.
While Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, more recently Bhagat et al. (2016) estimated that any two people on Facebook were separated, on average, by three and a half degrees. On the random network on the left, a person has, on average, 2 friends, while a random friend has, on average, 2.25 friends. On the right-hand network, the gap is even greater: there too, a person has, on average, 2 friends, but a random friend has, on average, more than 4 friends.
Figure 2: Random networks, 500 nodes (Watts-Strogatz and Barabási)
This paradox, observed in 1991 by sociologist Scott Feld, is very easily demonstrated. Heuristically, we can see a link with the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the term on the left is the average number of friends of my friends, and the term on the right my average number of friends. The difference is all the greater the greater the dispersion of the number of friends. While the left-hand network is very dense, the right-hand network has a power-law property: the distribution of the number of friends follows a power law (or Zipf’s law, or Pareto’s law). Figure 3 shows the distribution of the number of friends on a network, on a double logarithmic scale: linearity indicates a power-law distribution. This type of distribution can be found in a very large number of networks, particularly Facebook, as shown by Wohlgemuth & Matache (2014).
Figure 3: Distribution of the number of friends on simulated random networks (Watts-Strogatz, and Barabási in red)
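To see the friendship paradox numerically, here is a minimal sketch on simulated networks, using the igraph package (the network parameters below are illustrative assumptions, not the ones used for the figures),

library(igraph)
set.seed(1)
g_ws = sample_smallworld(dim=1, size=500, nei=1, p=.05)   # Watts-Strogatz network
g_ba = sample_pa(n=500, m=1, directed=FALSE)              # Barabasi-Albert (power-law) network
friend_paradox = function(g){
  d = degree(g)
  c(mean_degree = mean(d),                     # E[X], average number of friends
    mean_friend_degree = mean(d^2)/mean(d))    # E[X^2]/E[X], average number of friends of a friend
}
friend_paradox(g_ws)
friend_paradox(g_ba)   # the gap is much larger on the power-law network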
The classic interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (one then speaks of a “peer effect“), but it also has impacts in risk management or public health. Christakis & Fowler (2010) have shown that influenza epidemics can be detected almost two weeks in advance, by monitoring the infection in a social network. In particular, the analysis of the health of central people in a network is “an ideal way to predict outbreaks, but detailed information doesn’t exist for most groups, and to produce it would be time-consuming and costly”. To return to the example of the credit score: if the score turns out to be correlated with the number of friends, the friendship paradox makes it dangerous to use the friends’ scores as an indicator of an individual’s risk!
Another important feature of networks is the notion of homophily, introduced into sociology in 2001 by two important articles, corresponding to the tendency to be connected to one’s peers. McPherson et al. (2001) assumed that similarity generates connection, and that therefore people’s personal networks are homogeneous across many socio-demographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, with a focus on interracial friendships. Easley & Kleinberg (2010) present a number of consequences of homophily, ranging from the creation of tables at business meals to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (according to gender, age, socio-professional category, etc.), how the links are distributed between groups, and within groups.
Figure 4: Low homophily (left) and high homophily (right)
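A minimal sketch of that measurement, using igraph's nominal assortativity coefficient (the network and the two groups below are simulated, as an assumption, just to illustrate the computation),

library(igraph)
set.seed(42)
g   = sample_smallworld(dim=1, size=200, nei=3, p=.1)
grp = sample(1:2, vcount(g), replace=TRUE)         # two pre-existing groups (e.g. two genders)
el  = ends(g, E(g))                                # edge list, one row per link
mean(grp[el[,1]] == grp[el[,2]])                   # share of within-group links
assortativity_nominal(g, grp, directed=FALSE)      # 0: random mixing, 1: perfect homophily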
In an insurance context, an actuary seeks to create tariff classes, groups that are homogeneous in terms of risk, according to explanatory variables (the so-called rating variables). People who live in the same place, drive the same types of vehicles, and have the same characteristics are likely to be in the same class. But if homophily exists in a population, a tariff class could perhaps be observed on a network of friends. Why not then consider creating groups within a network?
In this spirit, Friendsurance was launched in Germany in 2010, and had more than 100,000 insured in 2018. In France, a short collaborative insurance experiment was launched in 2015 with Inspeer, offering to share damage insurance deductibles (in car or home insurance) with friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, are based on the formation of small groups by a broker. A portion of the insurance premiums paid goes into a group fund, the other portion to a third-party insurance company. Minor damage suffered by the policyholder is first covered by this group fund. For claims exceeding the deductible, the usual insurer steps in. A group can be formed by the insured, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (e.g. liability insurance with legal expenses insurance).
As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. There is no temptation to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of mutuality that exists in the law of large numbers disappears. But aren’t we reinventing version 2.0 of the tontine associations, with the strong return of risk sharing within close-knit communities?
References
Joshua Angrist. The perils of peer effects. Labor Economics, 30, 98-108, 2014
Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.
Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.
Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.
Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, 4 August 2015.
Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5(9): e12948, 2010. arXiv:1004.4792.
David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press. 2010.
Scott Feld. Why your friends have more friends than you do, American Journal of Sociology, 96 (6): 1464–1477, 1991.
Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.
Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.
Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology. 27: 415–444, 2001.
James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107 (3): 679-716, 2001.
Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol 66 (4) : 470-478, 2005.
Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013.
Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.
Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.
[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html
[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively