Climate Change and Insurance

I am taking advantage of the holidays to put online a short post, a preliminary version of a (short) article written with Anne Eyrault-Loisel, Alexis Hannart, and Julien Tomas, on climate change and insurance. All remarks and criticisms are welcome…

Impact of climate change

Munich Re (2014) has shown, based on data from the last 50 years, that the frequency of weather and climate catastrophes keeps increasing. So do insured losses, largely because of the development of insurance. According to Mills (2009), 16% of weather-related losses were insured in the 1980s, reaching 37.6% on average over the last 10 years. Economic growth, increasing wealth, the industrialization of vulnerable areas and the concentration of populations explain a large part of the increase, as noted by Botzen et al. (2010).

Figure 1: weather and climate catastrophes worldwide, adapted from Munich Re (2014).

Figure 2: Claim amounts caused by weather and climate catastrophes worldwide, adapted from Munich Re (2014).


Non-Life Insurance Actuarial Science #9

For the ninth chapter of the non-life insurance actuarial science course at ENSAE, a small grab bag before tackling the modeling of liabilities, talking a bit about Tweedie models (collective model vs. individual models), variable selection, and model selection. The slides are online (as often, the downloadable pdf version is more complete than the one on slideshare).

How Could Classification Trees Be So Fast on Categorical Variables?

I think that over the past months, I have been saying incorrect things about classification with categorical covariates, because I never took the time to look at it carefully. Consider some simulated dataset, with a logistic regression,

> n=1e3
> set.seed(1)
> X1=runif(n)
> q=quantile(X1,(0:26)/26)
> q[1]=0
> X2=cut(X1,q,labels=LETTERS[1:26])
> p=exp(-.1+qnorm(2*(abs(.5-X1))))/(1+exp(-.1+qnorm(2*(abs(.5-X1)))))
> Y=rbinom(n,size=1,p)
> df=data.frame(X1=X1,X2=X2,p=p,Y=Y)

Here, we use a continuous covariate, except that it is considered as non-observed. Instead, we have a categorical covariate with 26 categories. The (theoretical) relationship between the covariate and the probability is given below,

> vx1=seq(0,1,by=.001)
> vp=exp(-.1+qnorm(2*(abs(.5-vx1))))/(1+exp(-.1+qnorm(2*(abs(.5-vx1)))))
> plot(vx1,vp,type="l")

and the empirical probability can be computed for each modality.
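A short sketch like the following (my own code, not necessarily the one used to produce the original plot) gives those empirical averages,

> emp_prob=aggregate(df$Y,by=list(X2=df$X2),mean)
> barplot(emp_prob$x,names.arg=as.character(emp_prob$X2),ylab="empirical probability")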

If we run a classification tree, we get

> library(rpart)
> tree=rpart(Y~X2,data=df)
> library(rpart.plot)
> prp(tree, type=2, extra=1)

To be more specific, the output is here

> tree
1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.3 0.302
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *
    5) X2=F,G,H,I 153  37.22876 0.4183007       *
  3) X2=A,B,C,D,E,S,T,U,V,W,X,Y,Z 501 109.61 0.67
    6) X2=B,C,D,E,S,T,U,V,W,X 385  90.38 0.623  *
    7) X2=A,Y,Z 116  14.50862 0.8534483         *

 

Note that it takes less than a second to get that output. So clearly, we did not look at all combinations between modalities. For the first node, there are like 2^26 possible groups, i.e.

> 2^26
[1] 67108864

It is big… not huge, but too big to try all combinations, since that's only the first node, and we have to do it again on the two leaves, etc. Antoine (aka @ly_antoine) told me, while we were having a coffee after lunch today, the trick to get a fast algorithm on categorical covariates. And as usual, the idea is very clever…

First, we need a function to compute the Gini index

> gini=function(y,classe){
+    T=table(y,classe)
+    nx=apply(T,2,sum)
+    n=sum(T)
+    pxy=T/matrix(rep(nx,each=2),nrow=2)
+    omega=matrix(rep(nx,each=2),nrow=2)/n
+    g=-sum(omega*pxy*(1-pxy))
+    return(g)}
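For instance, the index associated with an arbitrary split, say {A,B,C} against all the other letters (a split I just made up, only to illustrate how the function is called), would be obtained with

> gini(y=df$Y,classe=df$X2 %in% c("A","B","C"))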

For the first node, the idea is very simple:

  • Compute the empirical average of Y for each of the 26 modalities,
> cond_prob=aggregate(df$Y,by=list(df$X2),mean)
  • Then sort those conditional averages, from the smallest to the largest,
  • Based on that ordering, consider the modalities sorted accordingly,
> Group_Letters=cond_prob[order(cond_prob$x),1]

  • Then consider (only) the 26 possible partitions obtained by cutting that ordered sequence, instead of the 2^26 possible combinations,

> v_gini=rep(NA,26)
> for(v in 1:26){
+   CLASSE=df$X2 %in% Group_Letters[1:v]
+   v_gini[v]=gini(y=df$Y,classe=CLASSE)
+ }

If we plot them, we get

> plot(1:26,v_gini,type="b")

As with continuous variables, we look for the maximum value, and then we have our two groups,

> sort(Group_Letters[1:which.max(v_gini)])
 [1] F G H I J K L M N O P Q R

That's exactly what we got with the rpart tree in R,

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30

Now, consider the leaf on the left (for instance)

> sub_df=df[df$X2 %in% sort(Group_Letters[1:which.max(v_gini)]),]

Then use the same algorithm as before: sort the conditional means,

> cond_prob=aggregate(sub_df$Y,by=list(sub_df$X2),mean)
> s_Group_Letters=cond_prob[order(cond_prob$x),1]

Then compute Gini indices based on groups obtained from that ordering,

> v_gini=rep(NA,length(s_Group_Letters))
> for(v in 1:length(s_Group_Letters)){
+   CLASSE=sub_df$X2 %in% s_Group_Letters[1:v]
+   v_gini[v]=gini(y=sub_df$Y,classe=CLASSE)
+ }

If we plot it, we get our two groups,

> plot(1:length(s_Group_Letters),v_gini,type="b")

And the first group is here

> sort(s_Group_Letters[1:which.max(v_gini)])
[1] J K L M N O P Q R

Again, that’s exactly what we got with the R function

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *

Clever, isn't it?
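To wrap up, here is a minimal sketch that packages the ordering trick into a single function, reusing the gini function defined above (the name best_split and the code below are mine, not something exported by rpart),

> best_split=function(y,x){
+   # empirical average of y for each modality of x
+   cond_prob=aggregate(y,by=list(x),mean)
+   # modalities sorted by those empirical averages
+   sorted_levels=cond_prob[order(cond_prob$x),1]
+   # only the splits consistent with that ordering are considered
+   v_gini=sapply(1:(length(sorted_levels)-1),function(v)
+     gini(y=y,classe=x %in% sorted_levels[1:v]))
+   sort(sorted_levels[1:which.max(v_gini)])
+ }
> best_split(df$Y,df$X2)

which should return the same group of letters as the one obtained at the root of the tree above.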

Inter-relationships in a matrix

Last week, I wanted to display inter-relationships between data in a matrix. My friend Fleur, from AXA, mentioned an interesting possible application, to car accidents. In car-against-car accidents, it might be interesting to see which parts of the cars were involved. On https://www.data.gouv.fr/fr/, we can find such a dataset, with a lot of information about car accidents involving bodily injuries (in France, a police report is mandatory, and all of them are recorded in a big dataset… actually several datasets, with information on the people involved, the cars, the locations, etc). For the 2014 claims, the dataset is

> base = read.csv("https://www.data.gouv.fr/s/resources/base-de-donnees-accidents-corporels-de-la-circulation-sur-6-annees/20150806-153355/vehicules_2014.csv")

Let us keep only the claims involving two vehicles,

> T=table(base$Num_Acc)
> idx=names(T)[which(T==2)]

For 2014, we have 32,222 claims.

> length(idx)
[1] 32222

In this dataset, we have information about where the cars were hit (codes '1' to '8', for the front, the back and the sides), plus '9' for multiple hits (in rollover accidents), and '0' should be missing information.

> nom=c("NA","Front","Front R",'Front L',"Back","Back R","Back L","Side R","Side L","Multiple")

Now, we simply have to go through our dataset and fill in the matrix. My first idea was to build a symmetric one,

> B=base[base$Num_Acc %in% idx,]  
> B=B[order(B$Num_Acc),]
> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+   M[a,b]=M[a,b]+1
+   M[b,a]=M[b,a]+1
+ }
> rownames(M)=nom
> colnames(M)=nom

The problem, when we ask for a symmetric chord diagram, is that we cannot have Front – Front claims (since values on the diagonal are removed)

> library(circlize)
> chordDiagramFromMatrix(M,symmetric=TRUE)

So let's pretend that there could be some distinction in the dataset between the first and the second vehicle, e.g. the first one is the 'responsible' driver or, for an insurer, the first one is your insured, just to avoid this symmetry problem,

> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+ M[a,b]=M[a,b]+1
+ }
> rownames(M)=paste("A",nom,sep=" ")
> colnames(M)=paste("B",nom,sep=" ")

If we visualize the chord diagram, this time it is more complex to analyze,

> chordDiagram(M)

Below we have the first row (say our driver, letter A) and on top, the second row (say the other driver, letter B),

In bodily injury claims, we observe a large proportion of Front-Front claims, as well as Front-Back. And, as expected, Back-Back claims are not that common…
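A quick way (not in the original post) to read those proportions off the non-symmetric matrix M built above is, for instance,

> P=prop.table(M)
> P["A Front","B Front"]
> P["A Front","B Back"]+P["A Back","B Front"]
> P["A Back","B Back"]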