Category Archives: Actuarial science

On the poor performance of classifiers in insurance models

Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a “good” model, and they are always surprised to get a low AUC when trying to model the probability to claim a loss, to die, to commit fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect, because there is a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs, to see what’s going on. And because I don’t want to waste time fitting models, we will assume each time that we have a perfect model. I want to show that the upper bound of the AUC is actually quite low! So it’s not a modeling issue, it is a fundamental issue in insurance!

By ‘perfect model’ I mean the following: \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega is unobservable! So we use covariates (like the age of the driver in motor insurance, or of the policyholder in life insurance, etc.). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check whether our model is good (or not) using the ROC curve, obtained from confusion matrices comparing the y_i‘s and the \widehat{y}_i‘s, where \widehat{y}_i=1 when the estimated probability \widehat{\mathbb{P}}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models: I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold! The point is that it is possible to claim a loss (y=1) even if the probability is 3% (so that most of the time \widehat{y}=0), and to not claim one (y=0) even if the probability is 97% (so that most of the time \widehat{y}=1). That’s the idea with randomness, right?

So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to commit fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,

In both cases, there is, on average, a 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval: 90% of the individuals have a probability to claim a loss in that interval (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it’s more like 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.

Consider now some dataset, with Bernoulli variables y drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model: I do not estimate a model based on some covariates; here, I assume that I know the probability perfectly (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above)

a=m*(m*(1-m)/v-1)
b=(1-m)*(m*(1-m)/v-1)
p=rbeta(n,a,b)

From those probabilities, I generate occurrences of claims, or deaths,

Y=rbinom(n,size = 1,prob = p)

Then, I compute the AUC of my “perfect” model,

auc.tmp=performance(prediction(p,Y),"auc")

And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code

library(ROCR)
n=1000
ns=200
ab_beta = function(m,inter){
  a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
            interval=c(.0000001,1000000))$root
  b=a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter=function(m=.5,i=.05){
  V_auc=rep(NA,ns)
  b=-1
  essai = try(ab <- ab_beta(m,i), TRUE)
  if(inherits(essai, what="try-error")) a = -1
  if(!inherits(essai, what="try-error")){
    a = ab[1]
    b = ab[2]
  }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p=rbeta(n,a,b)
      Y=rbinom(n,size = 1,prob = p)
      auc.tmp=performance(prediction(p,Y),"auc")
      V_auc[s]=as.numeric(auc.tmp@y.values)}
    L=list(moy_beta=m,
           inter_beta=i,
           q05=qbeta(.05,a,b),
           q95=qbeta(.95,a,b),
           moy_AUC=mean(V_auc),
           sd_AUC=sd(V_auc),
           q05_AUC=quantile(V_auc,.05),
           q95_AUC=quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm=seq(.025,.975,by=.025)
Vi=seq(.01,.5,by=.01)
V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) 
Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V,
      xlab="Probability (Average)",
      ylab="Dispersion (Q95-Q5)",
      col=
        colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)

On the x-axis, we have the average probability to claim a loss (of course, there is a symmetry here). And on the y-axis, we have the dispersion: the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and a 20% dispersion (meaning that, in the portfolio, 90% of the insured have between a 20% and a 40% chance to claim a loss, or between 15% and 35%), we get, on average, a 60% AUC, with a perfect model! So with only a few covariates, reaching 55% should already be considered great!

My point here is that with a low dispersion, we cannot expect to get a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured have between a 3% and a 20% chance to claim a loss! That’s less than 20% dispersion! And in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65%!
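
To put a number on that last claim, here is a self-contained sketch (the 10% average probability is my own reading of the example above, so treat it as an assumption): it solves for the Beta parameters matching a 10% mean and a 3%-20% central 90% interval, then simulates portfolios and averages the AUC of the “perfect” model.

library(ROCR)
set.seed(1)
n  = 1000      # portfolio size
ns = 200       # number of simulated portfolios
m  = .10       # average probability to claim a loss (assumed)
inter = .17    # dispersion, Q95 - Q5 (roughly 20% - 3%)
a  = uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
             interval=c(.0000001,1000000))$root
b  = a/m - a
V_auc = rep(NA, ns)
for(s in 1:ns){
  p = rbeta(n, a, b)                  # true individual probabilities
  Y = rbinom(n, size = 1, prob = p)   # observed claims
  V_auc[s] = as.numeric(performance(prediction(p, Y), "auc")@y.values)
}
mean(V_auc)    # to be compared with the 60%-65% figure mentioned above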

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That’s the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that’s not quite correct: with a factor n, we would get the variance of the total loss of the insurance company. We simply removed the n factor in the table.

  • then, we have perfectly observable heterogeneity: insured have a risk factor \Omega, observable, and in that case, the ‘pure’ premium is \mathbb{E}[S|\Omega]

That’s what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that’s basically Pythagoras’ theorem).
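
The decomposition referred to here is the standard variance decomposition (law of total variance),

\text{Var}[S]=\mathbb{E}[\text{Var}[S|\Omega]]+\text{Var}[\mathbb{E}[S|\Omega]]

where the first term is the variance of the per-policy result when the premium is \mathbb{E}[S|\Omega], and the second term is the variance of the premiums across the portfolio, i.e. the part of the risk that segmentation removes from the insurer’s result.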

The interpretation is the following

And then, I usually mention the third and last case, more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained from some covariates, and the ‘pure’ premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition: there is one part that we observed previously, with some ‘perfect pricing’, and an additional part (which is positive) related to the fact that the covariates are just a proxy of the risk factor…
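
If, as is implicitly assumed here, the covariates carry no additional information on S once the risk factor \Omega is known, the variance of the insurer’s per-policy result can itself be decomposed as

\mathbb{E}[\text{Var}[S|\boldsymbol{X}]]=\mathbb{E}[\text{Var}[S|\Omega]]+\mathbb{E}[\text{Var}[\mathbb{E}[S|\Omega]\,|\,\boldsymbol{X}]]

the first term being the ‘perfect pricing’ part, and the second, positive, term the price of using a proxy instead of the risk factor itself.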

The term on the left is then a lower bound, attained when the covariates available for pricing actually allow us to recover the risk factor.

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have on the x-axis the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses. Here, it’s 10. That’s the plain red line (on the left). In the middle, the y-axis is the insured profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that’s the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it’s 0, with some variance,

  • premium based on covariates

Consider now a simple covariate x: assume that we’ve been able to create a binary variable that can distinguish the low risks from the high risks. Here, there are two levels for the premium: the low premium is close to 6, and the high one is close to 14. That’s again the graph on the left.

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller: it is 6 (while it was 10 before). When the loss was close to 10, it previously meant a zero profit, but now it is either a loss of 4 or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, and less variance. That’s the decrease in variance we discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That’s what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.
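
A minimal sketch of this kind of simulation (the gamma losses with conditional mean 2 + 16·omega, and the split of the covariate at the median, are my own toy choices, not the exact setup behind the graphs) compares the variance of the per-policy result under the three premiums:

set.seed(123)
n     = 1e5
omega = runif(n)                              # unobservable risk factor
mu    = 2 + 16*omega                          # E[S | omega], between 2 and 18, 10 on average
S     = rgamma(n, shape = 2, rate = 2/mu)     # individual losses
x     = (omega > .5)                          # binary covariate, a proxy of the risk factor
pi1   = mean(S)                               # same premium for everyone (close to 10)
pi2   = ave(S, x, FUN = mean)                 # two premium levels (close to 6 and 14)
pi3   = mu                                    # premium based on the risk factor itself
c(flat = var(S - pi1), covariate = var(S - pi2), risk_factor = var(S - pi3))

The three variances decrease from left to right, illustrating the decomposition above: segmentation transfers part of the variability from the insurer’s result to the premiums.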

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that’s a lower bound for the variance of the result of the insurance company… I hope these graphs clarify what’s going on here…

Do risk classes go beyond stereotypes?

Generalization, stereotypes and clichés

In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention, and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook a fish for friends who have come to eat, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to make a more judicious, even optimal decision. Without picking just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.

“To generalize is to be an idiot, to particularize is the alone distinction of merit”, wrote William Blake around 1800, annotating speeches by the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society: moral punishment, but sometimes also legal punishment (in hiring, for example), in a society that aims to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he or she is attached. But isn’t that what the actuary does every day?

The usual suspects

For Schauer (2009), this “generalization“, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial“. If I decide to insure a sports car, I am assigned the risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, in Schauer’s sense. We all do it, much more often than most of us would probably admit. We do it when we choose airlines based on their safety record, punctuality or lost luggage. We do it when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless, on the basis of nationality for example, by saying that French people are rude, or that Scots all wear kilts, while at the same time acknowledging that more pernicious stereotypes, based on ethnic origin, gender or sexual orientation, are far too widespread today! As the pejorative connotation of the word “prejudice” suggests, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if the group assignment has a solid statistical basis. Yet the big difference between actuarial science and everyday life is that actuaries have to rely on a large number of observations. On a personal level, I can decide not to travel with a given airline anymore because, on three trips, I have had two bad experiences. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!

In fact, the question is often whether an insurance company’s refusal to provide coverage, or the increase in the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Lemmens (2000) noted, the question was put to the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to discrimination based on the gender of policyholders: insurers could no longer differentiate the prices of insurance products according to whether the policyholder was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction times (and an increased risk of sudden disability), etc. And although there are many individual variations, the available data provide an important empirical justification.

Machines, causality, and stereotypes

A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in their “Ladder of Causation“, there are three levels. At the first level, we find the notion of association (or correlation), or even conditional probability, which serves as a basis for the constitution of stereotypes: if we observe

\mathbb{P}[\text{caries}\,|\,\text{brushing your teeth}] < \mathbb{P}[\text{caries}\,|\,\text{not brushing your teeth}]

then brushing teeth will be associated with a decrease in the probability of having caries. This is also the basis of regression methods, which rely on correlations between the variable of interest and others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the graph on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype doesn’t necessarily rest on the causal link: if I see a lot of cyclists passing by the window, I’ll tell myself it must be hot, or at least warm.

Figure 1: Näytä Data – Author’s visualization

The first level answers the question “what if I see…?“ (e.g. “what cycling traffic should we expect if the temperature reaches 20°C?“), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?“. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing your teeth is simply more natural for children with good teeth? Here we see the third level of the ladder coming up, asking the question “what if I had done…?“, based on the idea of a counterfactual model. We are no longer content to measure correlations; we build a model explaining what would happen if we made a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments, which a machine cannot (yet) do. And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long wondered how to explain the fact that small babies of smoking mothers have a higher probability of survival than small babies of non-smoking mothers. The intuition that something is wrong comes from the prejudices, the stereotypes, that we have, and that a machine cannot have.

When actuaries tell each other stories

As Antonio & Charpentier (2017) noted, the European “gender directive” has confused many insurers who used gender to construct their rates, as the latter was highly correlated with the frequency of claims. But once telematic data were introduced, gender was no longer significant in the regression. Gender had long been used as a proxy to capture an effect that can now be observed directly using telematic data, and it gave rise to many sexist clichés and other stereotypes.

But stories also make it possible to distinguish between a false correlation (“spurious correlation“) and a correlation that could be interpreted. In Figure 2, we have life expectancy at birth, a variable that we could try to explain, in a pension-study context for example, for each French département. Next to it, two variables taken at random: the number of tennis licences, and the number of advertising agencies. Stereotypes are what will allow us to construct a causal graph, and to understand why there is such a strong correlation between these variables and life expectancy.

Figure 2: Life expectancy at birth for men, left. At the centre, number of tennis licenses per 100,000 inhabitants (source FFT). On the right, number of advertising agencies per 100,000 inhabitants (source INSEE, code NAF 7311Z). Visualization of the author.

Hyper-individualization as an answer?

While William Blake condemned stereotypes by saying “to generalize is to be an idiot“, he also clearly went further, continuing with “to particularize is the alone distinction of merit“. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others“. And morality seems to prove them right, against the actuaries. Yet, not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity, and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…

Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ? Risques, 110: 107-110.

Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.

Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica, 42: 65-98.

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Lemmens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill Law Journal, 45(2): 347-412.

Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.

Networks to reinvent insurance?

The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who tried to find a walk – starting from a given point – that would bring us back to that point after passing once and only once over each of the seven bridges of the city of Königsberg. These networks can be compared to metro networks, made up of stations (the nodes), linked to one another by rails, or not, or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to create relatively homogeneous communities that agree to share a risk, recreating a form of mutualisation.

Network and credit

In genealogy, we will have hierarchical networks, a child being linked to his parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyze the links between individuals (or organizations) within a group. Friendships can be studied in a schoolyard (a link that could be an invitation to a birthday party) or e-mail exchanges in a company (the Enron e-mail database has been widely used, with over 180,000 messages exchanged between 36,000 employees). Figure 1 shows two networks of 20 individuals (A, B, …, T).

 

Figure 1: Random networks, 20 nodes (Watts-Strogatz and Barabási)

In a Facebook or LinkedIn type of vision, we will say that E and F are linked, in the sense of being “friends”, if there is a segment linking points E and F. A network can be directed, for example if we study the exchange of messages (E wrote to F), or money loans (E lent money to F). While historically only adjacency was studied (the existence or not of links), we can now add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes are then the banks). The study of networks within village communities in developing countries has led to a better understanding of informal finance mechanisms. Banerjee et al. (2013) study the dissemination of information in a network, and more particularly microfinance loans.

While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit organizations to use a borrower’s social network to determine whether or not he or she represents a good credit risk. In particular, if the friends’ credit scores were too low, a person could be denied credit. This situation is dangerous because of particular properties of networks, and more specifically the paradox of friends.

From the very small world to the paradox of friends

In 1929, Frigyes Karinthy hypothesized that any person on Earth could be connected to any other person by a chain of individual relationships involving at most 6 links. “We should select anyone from the world’s 1.5 billion people, anyone, anywhere. It seems that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the chosen individuals using nothing other than the network of personal acquaintances.” This theory of six handshakes originated in a literary short story. It was not until the work of Michael Gurevich in the 1960s, and then of Stanley Milgram ten years later, that the first attempts to quantify these relationships appeared, under the name of the “Small World Problem”.

While Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, more recently, Bhagat et al. (2016) estimated that any two people on Facebook were connected by an average of three and a half people. On the random network on the left, a person has, on average, 2 friends, while a random friend has, on average, 2.25 friends. On the right-hand network, the gap is even larger: there too, a person has, on average, 2 friends, but a random friend has, on average, more than 4 friends.

 

Figure 2: Random networks, 500 nodes (Watts-Strogatz and Barabási)

This paradox, observed in 1991 by the sociologist Scott Feld, is very easily demonstrated. Heuristically, we can see a link with the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the term on the left is the number of friends of my friends, divided by my number of friends. The larger the dispersion of the number of friends, the larger the difference. While the left-hand network is very dense, the right-hand network has a power-law property: the distribution of the number of friends follows a power law (or Zipf’s law, or Pareto’s law). Figure 3 shows the distribution of the number of friends on a network, on a double logarithmic scale: linearity indicates a power-law distribution. This type of distribution can be found in a very large number of networks, in particular Facebook, as shown by Wohlgemuth & Matache (2014).
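
To see the paradox on the two families of random graphs used in the figures (Watts-Strogatz and Barabási-Albert), here is a small sketch with the igraph package; the sizes and parameters are arbitrary, chosen only to mimic the “average of 2 friends” mentioned above.

library(igraph)
set.seed(1)
g_ws = sample_smallworld(dim = 1, size = 500, nei = 1, p = .05)   # Watts-Strogatz
g_ba = sample_pa(n = 500, m = 1, directed = FALSE)                # Barabasi-Albert
friend_paradox = function(g){
  d = degree(g)
  c(mean_degree = mean(d),                   # average number of friends
    mean_friend_degree = mean(d^2)/mean(d))  # E[X^2]/E[X], average number of friends of a friend
}
friend_paradox(g_ws)
friend_paradox(g_ba)

On the small-world graph the two numbers are close; on the power-law graph the average friend has noticeably more friends than the average node, which is exactly the friends paradox.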

 

Figure 3: Distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red)

The classic interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (we then speak of a “peer effect“), but it also has impacts in risk management or public health. Christakis & Fowler (2010) have shown that influenza epidemics can be detected almost two weeks in advance, by monitoring the infection in a social network. In particular, analyzing the health of central people in a network is “an ideal way to predict outbreaks, but detailed information doesn’t exist for most groups, and to produce it would be time-consuming and costly”. To return to the example of the credit score, if it turns out to be correlated with the number of friends, the friends paradox makes it dangerous to use the friends’ score as an indicator of an individual’s risk!

The importance of homophily

Another important feature of networks is the notion of homophily, introduced into sociology in 2001 by two important articles, and corresponding to the tendency to be connected to one’s peers. McPherson et al. (2001) started from the principle that similarity generates connection, and that, therefore, people’s personal networks are homogeneous across many socio-demographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, with a focus on interracial friendships. Easley & Kleinberg (2010) present a number of consequences of homophily, ranging from the seating arrangement at business dinners to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (by gender, age, socio-professional category, etc.), how the links are distributed: between groups, or within groups.
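
One way to quantify this on a graph with known groups is the nominal assortativity coefficient; below is a small sketch with igraph on two simulated two-group networks (the group sizes and link probabilities are arbitrary), one with strong homophily and one with almost none.

library(igraph)
set.seed(2)
grp = rep(1:2, each = 50)                                   # two pre-existing groups
pm_high = matrix(c(.10, .01, .01, .10), 2)                  # links mostly within groups
pm_low  = matrix(c(.06, .05, .05, .06), 2)                  # links spread across groups
g_high = sample_sbm(100, pref.matrix = pm_high, block.sizes = c(50, 50))
g_low  = sample_sbm(100, pref.matrix = pm_low,  block.sizes = c(50, 50))
assortativity_nominal(g_high, types = grp)                  # strongly positive: marked homophily
assortativity_nominal(g_low,  types = grp)                  # close to zero: little homophily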

 

Figure 4: Low homophily (left) and high homophily (right)

In an insurance context, an actuary seeks to create tariff classes, groups that are homogeneous in terms of risk, based on explanatory variables (the so-called rating variables). People who live in the same place, drive the same types of vehicles, and have the same characteristics are likely to be in the same class. But if homophily exists in a population, a tariff class could perhaps be recovered from a network of friends. Why not, then, consider creating groups within a network?

Using insurance networks

In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 insureds in 2018. In France, a short-lived collaborative insurance experiment was launched in 2015 with Inspeer, offering to share damage insurance deductibles (in car or home insurance) with friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, are based on the formation of small groups by a broker. A portion of the insurance premiums paid goes into a group fund, the other portion to a third-party insurance company. Minor damage suffered by the policyholder is first covered by this group fund. For claims exceeding the deductible, the usual insurer steps in. A group can be formed by the insureds themselves, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (e.g. liability insurance with legal expenses cover).

As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. There is little temptation to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of mutuality that exists in the law of large numbers disappears. But aren’t we just reinventing a version 2.0 of the tontine associations, with the strong comeback of risk sharing within close-knit communities?

References

Joshua Angrist. The perils of peer effects. Labour Economics, 30: 98-108, 2014.

Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.

Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.

Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.

Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, 4 August 2015.

Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5(9): e12948, 2010. arXiv:1004.4792.

David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press, 2010.

Scott Feld. Why your friends have more friends than you do. American Journal of Sociology, 96(6): 1464-1477, 1991.

Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.

Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.

Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology, 27: 415-444, 2001.

James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107(3): 679-716, 2001.

Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol, 66(4): 470-478, 2005.

Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013.

Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.

Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.

[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html

[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively


Insurers’ actuarial data, a treasure trove for customer knowledge?

In the supplement of Les Échos, publication of an interview from last spring.

What types of data do insurers use?

Actuaries, the data and statistics specialists of insurance companies, use internal customer data, namely the data that policyholders provide, in particular when taking out a contract, about their vehicle, their place of residence, etc. They can also “dig” elsewhere, in the Argus car-value listings for example, to estimate the maximum amount of a claim; in flood histories; burglary data; INSEE data; or weather data. From all these data, actuaries build models in order to predict scores for accidents, fraud, policy cancellations…

What is the analysis of actuarial data used for?

These data are mainly used to segment the portfolio, to find the right rating variables, in order to build tariff classes in motor or home insurance, for example. They can also be used to find the right price to cover new risks. Finally, they make it possible to meet new demands from policyholders. Before, with motor insurance for example, when your car broke down, you were reimbursed for the costs and that was enough. Now, customers want to be put up somewhere, to have a replacement car… It is up to the actuary to evaluate the cost of these new services in order to include them in the various premiums offered by the insurer.

“On one side, there is the marketing approach, which seeks to individualise in order to offer personalised products; on the other, the actuary’s approach, which seeks to find relevant categories, by pooling risks.”

to be continued…

Personal Data and Insurance

For several weeks (not to say several months), I have been discussing the protection of personal data in insurance with Delphine Cocteau-Senn and Rodolphe Bigot (a dialogue between jurists and an actuary). It all started 15 months ago, with a round table organised in Amiens, a dinner the night before, the discussions during the round table, and then dozens and dozens of exchanged messages, phone calls, dinners… Rodolphe Bigot had invited me to another round table, on insurance law, in Caen, and for my part I had asked Delphine Cocteau-Senn to teach, since last year, in the Actuariat – Data Science programme of the Institut des Actuaires. In short, we have had a great many exchanges, and seeing the positions taken here and there (not to say the hot air), we thought that our discussions could shed some light on the debates stirring actuaries and jurists.

It was all the more interesting, I think, because we genuinely learned from one another through these exchanges: exchanges between an actuary and two jurists, all three of us academics. For several weeks, we have been trying to put these exchanges down in writing, and the result is now online. The format we chose is a legal presentation of several points (related to notions found in the General Data Protection Regulation, no. 2016/679), a few actuarial questions, and a first attempt at an answer. We hesitated for a long time about the format, but we kept the form of a discussion, which is probably more pleasant to read than a text written by three authors. I include the introduction here (the full document is online).


Insurance series on Variances.eu

On variances.eu (the ENSAE Alumni blog), a series on insurance is starting, with a first post on insurance and the challenge of low interest rates, by Bernard Delas, vice-president of the Autorité de contrôle prudentiel et de résolution (ACPR).

Since the subprime crisis, the main European economies have experienced an uninterrupted decline in interest rates. Over the last five years, they have reached levels never seen before, below zero. Such a scenario had not been envisaged by the post-crisis regulation of financial institutions and markets. While these interest rates are beneficial in the short term for the financing of commercial and industrial activities, they raise unprecedented challenges not only for investment funds, banks and insurers, but also for the authorities supervising financial institutions, which are responsible for preserving the stability of the financial system and, especially since the crisis, for protecting consumers and savers.

[to be continued] And for the occasion – my first actuarial course starts tomorrow at ENSAE – I am taking the opportunity to repost a few statistics on ENSAE graduates who become actuaries.

“Ethics in Quantitative Finance”

Just before going to the workshop on dependencies in finance and insurance, Tim Johnson (also known as @TCJUK on Twitter), researcher at Heriot-Watt University in Edinburgh and blogger at http://magic-maths-money.blogspot, sent me a copy of his manuscript entitled Ethics in Quantitative Finance: a pragmatic theory of markets. Opening the book, one thinks of Peter L. Bernstein and his masterpieces Capital Ideas (or the later Capital Ideas Evolving), as well as Against the Gods. But Tim’s book is quite different. It is not really about finance, but about financial valuation and actuarial science. We can clearly see the deep interactions between financial mathematics and actuarial science: about uncertainty, prices and probabilities. And all those topics are embedded in a philosophical perspective

the argument is presented that financial markets are radically uncertain environments, where correspondence theories of truth are meaningless since there are no matters of fact about an uncertain financial future. In the face of this uncertainty, markets are places where “the opinion which is fated to be ultimately agreed to by all who investigate” is sought and opinions are expressed through asset prices. This implies that markets are centres of communicative action and money is behaving as a language. Using Jürgen Habermas’ analysis, this implies that market prices ‒ statements of opinions ‒ must satisfy objective, subjective and social truth criteria. The argument presented is that reciprocity guarantees the objective truth, sincerity guarantees the subjective truth and charity guarantees the rightness of a price. This explains why reciprocity is embedded in financial mathematics.


That’s normal! (part 1): what if normality did not exist?

A few weeks ago, I was chatting with a colleague who is a jurist (while I was trying to glean some judicial statistics), and as we were discussing the slowness of the investigation phase, or the reversal of the burden of proof (I no longer remember which), I was surprised to hear her say “that’s normal”. I know that a jurist and a statistician do not necessarily give the same meaning to words, and this sentence bothered me, because a situation that may look “normal” (because it is observed regularly) is not necessarily “just” (that is Hume’s is/ought problem, but I will come back to it in another post).

The starting point is to understand what “empirical” normality is, as observed in a population; what could be “normal” for a statistician. To begin with, I wanted to come back to an example told in The End of Average by Todd Rose, who tries to show, with supporting examples, that the average man does not exist.

  • Quetelet’s average man

In the 19th century, when several astronomers measured the speed of the same celestial object, they (often) obtained several different measurements. To know “which one” to use in their calculations, the idea of using “the method of averages” quickly became established – as recalled by Stahl (2006), and especially Sheynin (1973) – this “average” having a greater precision than any other quantity (or, as we would say today, any other “statistic”).

Adolphe Quetelet was, it seems, the first to apply this calculation of averages to human measurements, introducing his famous concept of the “average man”. As I discussed in a previous post, the average is a peculiar quantity, whose meaning is not necessarily clear. If we define the average through the minimisation of a quadratic error, it has an interpretation in terms of prediction (we find here the notion of elicitable measure I mentioned in my last course): the average height is the height that a person drawn at random “should” measure (up to a random – and unpredictable – variation). In 1846, in a letter (published in the book Lettres sur la théorie des probabilités, appliquée aux sciences morales), Adolphe Quetelet used the image of the gladiator’s statue to explain what the average man could be.

  • Francis Galton’s interpretation

This average man was very popular at the time, in particular within the English eugenicist school, led at the time by Francis Galton, even though the latter was mainly interested in deviations from this norm (upward deviations and downward deviations). As Bulmer (2004) recalls, “the deviations from that average—upwards towards genius, and downwards towards stupidity—must follow the law that governs deviations from all true averages”. Galton’s work aimed at understanding these deviations. While Florence Nightingale claimed that “the Average Man is God’s Will”, Galton was more interested in their hereditary character. But does this “average man” make any sense?

  • The average human being does not exist

An interesting anecdote is that of two statues in Cleveland, those of Norma and Normann. The artist Abram Belskie and the obstetrician Robert Latou Dickinson made these statues together, in 1943. Their peculiarity is that no model posed for them: the aim was to represent a woman and a man with the average measurements of the time.

Once these statues were completed, a contest was organised to find out who they might represent. Several thousand people from Ohio sent in their measurements, but none matched those of the statues. Admittedly, several hundred had the same height; several hundred had the same chest size. But none had all the right measurements. Because, as Todd Rose explains, a human being is not one-dimensional: it is along several dimensions that we measure him or her.

And trying to summarise a person in a single one-dimensional quantity is far too reductive. That is what he shows in his book with intelligence tests, for example, where the same IQ can be associated with two very different people.

The same goes for deciding whether to hire someone: focusing on a single indicator makes no sense. The trouble, when working in a multivariate context, is that the average loses its meaning. To borrow the title of a post published six months ago, being average can be extraordinary.

  • The curse of dimensionality (in high dimension, space is very empty…)

In fact, this problem is well known to statisticians, under the name of the “curse of dimensionality”. Take a simple example: suppose that a quantity of interest follows a normal distribution \mathcal{N}(\mu,\sigma^2), for example weight, height, chest size, etc. One could say that the norm is to be in the interval [\mu\pm 1.5\cdot\sigma]. With a normal distribution, this happens in about 87% of cases,

> (1-2*pnorm(-1.5))
[1] 0.8663856

And the remaining 13% will be seen as “abnormal”. They can be abnormally small, or abnormally large. That is the picture below: here, we only look at one dimension.

We can now look at two dimensions, weight and height for example. The norm would then be to lie, in both dimensions, in the interval [\mu\pm 1.5\cdot\sigma]. If the two quantities are independent, the probability that both are “normal” is 75%,

> (1-2*pnorm(-1.5))^2
[1] 0.750624

In dimension two, 75% of the observations are “normal”, and 25% are “abnormal”.

In dimension 3, we drop to 65%,

> (1-2*pnorm(-1.5))^3
[1] 0.6503298

that is 35% of “abnormal” observations (more than a third).

And so on. In dimension five, we fall below 50%,

> (1-2*pnorm(-1.5))^5
[1] 0.4881532

in other words, being within the norm in all 5 dimensions is no longer the case for the majority of people. And in dimension twenty, those who are “normal” are actually atypical, with a proportion of the order of 5%,

> (1-2*pnorm(-1.5))^20
[1] 0.0567838
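
The same computation as a function of the dimension (same rule of \pm 1.5 standard deviations on independent components) makes the decay explicit; this is a small sketch, not taken from the original post:

d = 1:20
p_normal = (1 - 2*pnorm(-1.5))^d    # proportion of "normal" individuals in dimension d
plot(d, p_normal, type = "b",
     xlab = "dimension", ylab = "proportion within the norm")
abline(h = .5, lty = 2)             # the 50% line is crossed at dimension 5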

In short, normality is a particularly strange concept on the empirical level, because it is intuitively associated with the idea of a majority, whereas that is precisely not the case: normality is, in fact, atypical.

Actuarial Pricing Game, with Reinsurance

The Third Actuarial Pricing Game is still open, and the deadline for submission is still February 25th. As mentioned in the instructions, for those willing to play in a market where reinsurance is available, here are the prices offered by a reinsurance company.

As mentioned in the description, the price is per insurance policy, per year. Players should send me their premiums in a csv file, gross of reinsurance, and mention in the email that they want to purchase treaty (A), for instance (and mention explicitly in the subject of the email that they want to play in this specific market, where reinsurance is available).

Third Actuarial Pricing Game

With the support of ACTINFO Chair and the (French) Institute of Actuaries, our Third Actuarial Pricing Game starts today ! There is a toolbox file available online, with

  • a description of the game: the rules, the dates, and a description of the datasets
  • 3 datasets: one underwriting and one claims database for year 0 (training data), and one underwriting dataset to enter the game

Anyone can play. Students from various programs around the world, as well as practitioners, are welcome to play. It can be by teams, and there is no limit on team size. And there is no registration: to start playing, teams simply have to submit a dataset before the deadline (end of February), to pricing-game@univ-rennes1.fr.

Teasing for the Third Actuarial Pricing Game

We will launch, within the next few days, the Third Actuarial Pricing Game. The goal will be to replicate the behavior of insurance markets over time. There will be a first stage (January-February) where players will have to build a pricing model for motor insurance policies on the basis of 50,000 contracts observed in Year 0 (characteristics of contracts and policyholders, and claims datasets). Proposals for premiums for Year 1 will have to be provided for the same contracts (updated for the age, the location, the car model, etc.). Players will use only the underwriting datasets to provide premiums for Year 1.

Between February 25th (the deadline for premium proposals) and March 1st, we will replicate an insurance market by creating competition among insurers (the players), and by setting simple rules to match drivers and insurers (randomly among the cheapest at the beginning of the game; then, as time goes by, we will add inertia, i.e. a tendency to stay with the same insurer if the latter is not too expensive), as in the toy sketch below. Once the drivers and the players are matched, we will provide claims information for the insureds that each player has won. A new premium proposal must then be submitted at the end of Year 1 (i.e. at the end of March). Etc.
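
Here is a toy sketch of that kind of matching rule (the 5% tolerance and the way inertia is encoded are my own simplifications, not the official engine of the game): each driver stays with the current insurer if its quote is within 5% of the cheapest one, and otherwise picks at random among the insurers within that band.

set.seed(42)
n_drivers = 10
premiums  = matrix(runif(n_drivers*3, 400, 600), n_drivers, 3,
                   dimnames = list(NULL, c("ins1","ins2","ins3")))
current   = sample(1:3, n_drivers, replace = TRUE)          # insurer in the previous year
match_insurer = function(prem, cur, tol = .05){
  sapply(seq_len(nrow(prem)), function(i){
    best  = min(prem[i, ])
    if(prem[i, cur[i]] <= (1 + tol)*best) return(cur[i])    # inertia: stay if not too expensive
    cheap = which(prem[i, ] <= (1 + tol)*best)              # otherwise, pick among the cheapest
    cheap[sample.int(length(cheap), 1)]
  })
}
match_insurer(premiums, current)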

We will try here to replicate a car insurance market over four years. The goal (for players) will be to maximize the profit of the insurance company over those four years. To make the game more realistic, insurers are assumed to hold capital, and they can remain in the game only if their yearly loss ratio (claims over premiums) is below 150%. More information next week (rules, and training datasets).