Category Archives: ACT2040

Reinsurance

The last session of the non-life (IARD) actuarial science course will take place on Wednesday.

Among the complementary readings, Introduction à la réassurance, published by Swiss Re, as well as a few more technical documents, such as The Pareto model in property reinsurance, Exposure rating, Designing property reinsurance programmes, or Introduction to reinsurance accounting. Several reinsurers (and reinsurance brokers) publish technical studies on their websites, e.g. http://swissre.com/, http://munichre.com/, http://aon.com/, http://scor.com/ or http://guycarp.com/. Otherwise, I refer you to Peter Antal's lecture notes, quantitative methods in reinsurance.

The slides are online here,

Regarding the most expensive losses for insurance and reinsurance companies, http://businessinsider.com/… gives the following ranking, in 2010 dollars (see also http://media.swissre.com/…)
  1. Hurricane Katrina (US, Bahamas, Cuba, Aug. 2005), $ 72.3 billion
  2. Tōhoku earthquake and tsunami (Japan, March 2011), $ 35 billion
  3. Hurricane Andrew (US, Bahamas, August 1992), $ 25 billion
  4. September 11 attacks (US) $ 23.1 billion
  5. Northridge earthquake (US) $ 20.6 billion
  6. Hurricane Ike (US, Haiti, Dominican Republic, Sept. 2008) $ 20.5 billion
  7. Hurricane Ivan (US, Barbados, Sept. 2004) $ 14.9 billion
  8. Hurricane Wilma (US, Mexico, Jamaica, Oct. 2005), $ 14 billion
  9. Hurricane Rita (US, Cuba, Sept. 2005) $ 11.3 billion
  10. Hurricane Charley (US, Cuba, Jamaica, Aug. 2004) $ 9.3 billion

By way of comparison, the revenues of the largest reinsurers (premiums written in 2010) were, according to http://www.insurancenetworking.com/…

  1. Munich Reinsurance Company $ 31.3 billion
  2. Swiss Reinsurance Company Limited $ 24.7 billion
  3. Hannover Rueckversicherung AG $ 15.1 billion
  4. Berkshire Hathaway Inc. $ 14.4 billion
  5. Lloyd’s $ 13 billion
  6. SCOR S.E. $  8.8 billion
  7. Reinsurance Group of America Inc. $ 7.2 billion
  8. Allianz S.E. $ 5.7 billion
  9. PartnerRe Ltd. $ 4.9 billion
  10. Everest Re Group Ltd. $ 4.2 billion

Reserving with negative increments in triangles

A few months ago, I published a post on negative values in triangles, and how to deal with them when using a Poisson regression (the post was published in French). The idea was to use a translation technique:

  1. Fit a model not on the $Y_i$'s but on $Y_i^{(k)}=Y_i+k$, for some $k\geq 0$,
  2. Use that model to make predictions, and then translate those predictions, $\widehat{Y}_i^{(k)}-k$

This is what was done to get the following graph, where a Poisson regression was fitted. Black points are the $Y_i$'s, while blue points are the $\widehat{Y}_i^{(k)}$'s, for some $k\geq 0$. We fit a model to get the blue prediction, and then translate it to get the red prediction (on the $Y_i$'s).
http://freakonometrics.blog.free.fr/public/perso4/glm-translation.gif

In this example, there were no negative values, but it is possible to use it to get a better understanding of the impact of this technique. The prediction, here, is the red line. And clearly, the value of $k$ has an impact on the prediction (since we do not consider, here, a linear model: with a linear model, translating has no impact at all, except on the intercept).
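To see this, here is a minimal sketch on simulated (purely hypothetical) data: translating the response leaves the fitted values of a Gaussian linear model unchanged, once shifted back, while it does change those of a log-link Poisson regression,

> set.seed(1)
> x=1:20
> y=rpois(20,exp(1+.1*x))                # simulated counts
> k=10
> # identity link: the difference is zero, up to numerical precision
> max(abs(predict(lm(y~x))-(predict(lm(y+k~x))-k)))
> # log link: the translated-then-shifted fit differs from the direct fit
> max(abs(predict(glm(y~x,family=poisson),type="response")-
+ (predict(glm(y+k~x,family=poisson),type="response")-k)))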

The alternative mentioned in the previous post was to use this technique with several $k$'s, and then extrapolate:

  1. For a given $k$, fit a model not on the $Y_i$'s but on $Y_i^{(k)}=Y_i+k$, use that model to make predictions, and then translate those predictions, $\widehat{Y}_i^{(k)}-k$.
  2. Do it for several $k$'s.
  3. Use those estimates to extrapolate down to $k=0$ (which is the case we are interested in).

In the context of loss reserving, the idea is extremely simple. Consider a triangle with incremental payments

> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
> Y=T=PAID
> n=ncol(T)
> Y[,2:n]=T[,2:n]-T[,1:(n-1)]   
> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

Now, we do not have negative values here, but we can still check whether translation techniques can be used. The benchmark is the Poisson regression, since we can run it:

> y=as.vector(as.matrix(Y))
> base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
> reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson)

Here, the amount of reserves is the sum of the predicted values in the lower part of the triangle,

> py=predict(reg,newdata=base,type="response")
> sum(py[is.na(base$y)])
[1] 2426.985

which is exactly Chain Ladder’s estimate.

Now, let us use a translation technique to compute the amount of reserves. The code will be

> decal=function(k){
+ reg=glm(y+k~as.factor(ai)+as.factor(bj),data=base,family=poisson)
+ py=predict(reg,newdata=base,type="response")
+ return(sum(py[is.na(base$y)]-k))
+ }

For instance, if we translate by +5, we get

> decal(5)
[1] 2454.713

while a translation of +10 would return

> decal(10)
[1] 2482.29

Clearly, translations do have an impact on the estimation. Here, just to check, if we do not translate, we recover Chain Ladder's estimate,

> decal(0)
[1] 2426.985

The idea mentioned in the previous post was to try several translations, and then extrapolate, to get the value at 0. Here, the translations give the following estimates

> K=10:20
> (V=Vectorize(decal)(K))
 [1] 2482.290 2487.788 2493.279 2498.765 2504.245 2509.719 2515.187 2520.649
 [9] 2526.106 2531.557 2537.001

We can plot those values, and run a regression

> plot(K,V,xlim=c(0,20),ylim=c(2425,2540))
> abline(h=decal(0),col="red",lty=2)

the dotted horizontal line is Chain Ladder. Now, let us extrapolate

> b=data.frame(K=K,D=V)
> rk=lm(D~K,data=b)
> predict(rk,newdata=data.frame(K=0))
       1 
2427.623

One has to admit that it is not that bad. But yesterday evening, Karim asked me why I used a linear regression for my extrapolation. And to be honest, I do not know. I mean, the only answer might be that the points are almost on a straight line. So the first time I saw it, I was excited, and I ran a linear regression.

Now, let us see if we can do better. Because here, we use a translation of +10 or +20 (which might be rather small). What if we use much larger values? (We might have large negative incremental values.) With the following code, we try, each time, 11 consecutive values, the smallest one ranging from 1 to 50,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }     
> plot(hausse,res,type="l",col="red",ylim=c(2422,2440))
> abline(rk,col="blue")

Here, we compute reserves when extrapolations were done after 11 translations, from $k$ to $k+10$, for different values of $k$. The case where $k$ is ten was the one mentioned above,

> res[hausse==10]
[1] 2427.623

Actually, it might also be possible to consider not 11 translations, but 26, from $k$ to $k+25$. Here, we get

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(25+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }   
> lines(hausse,res,type="l",col="blue",lty=2)

We now have the dotted line

Here, it is getting worse. So let us stick with 11 translations. Perhaps we can try something different, for instance a Poisson regression with a log link (i.e. we consider an exponential extrapolation),

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson)
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }         
> lines(hausse,res,type="l",col="purple")

The purple line is the Poisson model, with a log link. Perhaps we can try another link function, like a quadratic one,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=
+ power(lambda = 2)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }     
> lines(hausse,res,type="l",col="orange")

That would be the orange line,

Here, we need a link function between identity (the linear model, the blue line) and the quadratic one (the orange one), for instance a power function 3/2,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=
+ power(lambda = 1.5)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }         
> lines(hausse,res,type="l",col="green")

Here, it looks like we can use that model for any kind of translation, from +10 up to +50, even +100! But I do not have any intuition about the use of this power function…

Chain Ladder, with R

A quick post to put online some of the code typed in class last Wednesday. We started by converting the Excel worksheet into a text file, to make it easier to read,

> setwd("C:\\Users\\savsalledecours\\Desktop")
> triangle=read.table("exACT2040.csv",header=TRUE,sep=";")	
> triangle
  ANNEE   X0   X1   X2   X3   X4   X5
1  2000 3209 4372 4411 4428 4435 4456
2  2001 3367 4659 4696 4720 4730   NA
3  2002 3871 5345 5398 5420   NA   NA
4  2003 4239 5917 6020   NA   NA   NA
5  2004 4929 6794   NA   NA   NA   NA
6  2005 5217   NA   NA   NA   NA   NA

The idea – when importing a triangle – is to obtain a dataset in the form above, with missing values in the lower part of the triangle (we will see why this is useful when we run regressions). We then computed the development factors and, at the same time, completed the triangle,

> T=triangle[,2:7]
> rownames(T)=triangle$ANNEE
> T2=T
> n=ncol(T)
> L=rep(NA,n-1)
> for(j in 1:(n-1)){
+ L[j]=sum(T[1:(n-j),j+1])/sum(T[1:(n-j),j])
+ T2[(n-j+1):n,j+1]=L[j]*T2[(n-j+1):n,j]
+ }

The development factors are

> L
[1] 1.380933 1.011433 1.004343 1.001858 1.004735
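In other words, writing (as is standard) $C_{i,j}$ for the cumulative payments of accident year $i$ after $j$ development years, the loop above computes

$$\widehat{\lambda}_j=\frac{\sum_{i=1}^{n-j}C_{i,j+1}}{\sum_{i=1}^{n-j}C_{i,j}},\qquad \widehat{C}_{i,j+1}=\widehat{\lambda}_j\cdot\widehat{C}_{i,j}\ \text{ for the unobserved part of the triangle.}$$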

and the completed triangle is

> T2
       X0       X1       X2       X3       X4       X5
2000 3209 4372.000 4411.000 4428.000 4435.000 4456.000
2001 3367 4659.000 4696.000 4720.000 4730.000 4752.397
2002 3871 5345.000 5398.000 5420.000 5430.072 5455.784
2003 4239 5917.000 6020.000 6046.147 6057.383 6086.065
2004 4929 6794.000 6871.672 6901.518 6914.344 6947.084
2005 5217 7204.327 7286.691 7318.339 7331.939 7366.656

The reserve amount is then obtained by taking the difference between the ultimate losses (in the last column) and the latest observed payments (on the last diagonal),

> CU=T2[,n]
> Pat=diag(as.matrix(T2[,n:1]))
> Ri=CU-Pat
> R=sum(Ri)

that is, numerically,

> R
[1] 2426.985

We then saw that a tail factor could be computed, assuming an exponential decay of the development factors, and we added a column with the ultimate amount, per accident year,

> logL=log(L-1)
> t=1:5
> b=data.frame(logL,t)
> reg=lm(logL~t,data=b)
> logLp=predict(reg,newdata=data.frame(t=6:100))
> Lp=exp(logLp)+1
> Linf=prod(Lp)
> T3=T2
> T3$Xinf=T3$X5*Linf
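In other words, the code above fits the (assumed) exponential decay $\log(\widehat{\lambda}_j-1)\approx a+b\,j$ by least squares, extrapolates it for $j=6,\dots,100$, and uses

$$\widehat{\lambda}_\infty=\prod_{j=6}^{100}\left(1+e^{\widehat{a}+\widehat{b}\,j}\right)$$

as the tail factor applied to the last column.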

We now have

> T3
       X0       X1       X2       X3       X4       X5     Xinf
2000 3209 4372.000 4411.000 4428.000 4435.000 4456.000 4459.149
2001 3367 4659.000 4696.000 4720.000 4730.000 4752.397 4755.755
2002 3871 5345.000 5398.000 5420.000 5430.072 5455.784 5459.639
2003 4239 5917.000 6020.000 6046.147 6057.383 6086.065 6090.366
2004 4929 6794.000 6871.672 6901.518 6914.344 6947.084 6951.993
2005 5217 7204.327 7286.691 7318.339 7331.939 7366.656 7371.862

(I leave it as an exercise to adapt the code to compute the reserve amount). Finally, we showed how to use a weighted regression to compute the development factors,

> T4=as.matrix(T$X0,n,1)
> for(j in 1:(n-1)){
+ Y=T[,j+1]
+ X=T[,j]
+ base=data.frame(X,Y)
+ reg=lm(Y~0+X,weights=1/X)
+ T4=cbind(T4,
+ predict(reg,
+ newdata=data.frame(X=T4[,j]
+ )))
+ }

which gives the same projection as the Chain Ladder method,

> T4
  [,1]     [,2]     [,3]     [,4]     [,5]     [,6]
1 3209 4431.414 4482.076 4501.543 4509.909 4531.263
2 3367 4649.601 4702.758 4723.184 4731.961 4754.367
3 3871 5345.591 5406.705 5430.188 5440.279 5466.039
4 4239 5853.775 5920.698 5946.414 5957.464 5985.673
5 4929 6806.619 6884.435 6914.337 6927.186 6959.986
6 5217 7204.327 7286.691 7318.339 7331.939 7366.656
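This is not a coincidence: with weights $1/C_{i,j}$, the weighted least squares slope in the regression of $C_{i,j+1}$ on $C_{i,j}$ (without intercept) is

$$\widehat{\lambda}_j=\underset{\lambda}{\text{argmin}}\left\{\sum_{i}\frac{(C_{i,j+1}-\lambda\, C_{i,j})^2}{C_{i,j}}\right\}=\frac{\sum_i C_{i,j+1}}{\sum_i C_{i,j}},$$

i.e. exactly the chain ladder development factor.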

More next Wednesday, even though we will probably go quickly over Mack's method (and the mean squared error computations, before getting to the Poisson regression). To be continued…

Solvency and loss reserving

On Wednesday, we will discuss in class the solvency of non-life (IARD) insurance companies. More specifically, we will talk about the provision for claims outstanding (PCO), i.e. "the estimated total cost of ultimate settlement of all claims incurred before the date of record, whether reported or not, less any amounts already paid out in respect thereof." For a global view of the approaches to these provisions, I refer to Le contrôle de la solvabilité des compagnies d'assurance, available online on the OECD website. The SOA published a report in 2009, Comparison of Incurred But Not Reported IBNR Methods, which I encourage you to read.

We will start working with triangles on Wednesday. Among the triangles we will handle,

> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")

which contains several datasets, including

> PAID
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 4372 4411 4428 4435 4456
[2,] 3367 4659 4696 4720 4730   NA
[3,] 3871 5345 5398 5420   NA   NA
[4,] 4239 5917 6020   NA   NA   NA
[5,] 4929 6794   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

as well as the triangle discussed on http://rworkingparty.wikidot.com/

> OthLiabData = read.csv("http://www.casact.org/research/reserve_data/othliab_pos.csv",header=TRUE, sep=",")
> library(ChainLadder)
> library(plyr)
> OL = SumData=ddply(OthLiabData,.(AccidentYear,DevelopmentYear,DevelopmentLag),summarise,IncurLoss=sum(IncurLoss_h1-BulkLoss_h1),
+ CumPaidLoss=sum(CumPaidLoss_h1), EarnedPremDIR=sum(EarnedPremDIR_h1))
> LossTri = as.triangle(OL, origin="AccidentYear",
+ dev = "DevelopmentLag", value="IncurLoss")
> Year = as.triangle(OL, origin="AccidentYear",
+ dev = "DevelopmentLag", value="DevelopmentYear")
> TRIANGLE=LossTri
> TRIANGLE[Year>1997]=NA
> TRIANGLE
      dev
origin      1      2      3      4      5      6      7      8      9     10
  1988 128747 195938 241180 283447 297402 308815 314126 317027 319135 319559
  1989 135147 208767 270979 304488 330066 339871 344742 347800 353245     NA
  1990 152400 238665 297495 348826 359413 364865 372436 372163     NA     NA
  1991 151812 266245 357430 400405 423172 442329 460713     NA     NA     NA
  1992 163737 269170 347469 381251 424810 451221     NA     NA     NA     NA
  1993 187756 358573 431410 476674 504667     NA     NA     NA     NA     NA
  1994 210590 351270 486947 581599     NA     NA     NA     NA     NA     NA
  1995 213141 351363 444272     NA     NA     NA     NA     NA     NA     NA
  1996 237162 378987     NA     NA     NA     NA     NA     NA     NA     NA
  1997 220509     NA     NA     NA     NA     NA     NA     NA     NA     NA

Midterm exam, elements of solution

The midterm exam is available as a pdf here and, as announced by email, the solutions are in the pdf online. Since no one seems to disagree with the proposed answers, the grades will be posted very soon. Regarding questions 18 and 19, here are some additional explanations (which I had not typed up in the pdf). We saw that the maximum likelihood estimator in a Poisson regression is asymptotically Gaussian,

$$\widehat{\boldsymbol{\beta}}_{P}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{P}))$$

(asymptotically) with

$$V_\infty(\widehat{\boldsymbol{\beta}}_{P})=\left(\sum_{i=1}^n \widehat{Y}_i\, \boldsymbol{X}_i\boldsymbol{X}_i'\right)^{-1}$$

For a negative binomial regression, write, in full generality, $\omega_i=\text{Var}(Y_i|\boldsymbol{X}_i)$ (we saw in class that several specifications are possible for this conditional variance). In that case,

$$\widehat{\boldsymbol{\beta}}_{BN}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{BN}))$$

with

$$V_\infty(\widehat{\boldsymbol{\beta}}_{BN})=\left(\sum_{i=1}^n \widehat{Y}_i\, \boldsymbol{X}_i\boldsymbol{X}_i'\right)^{-1}\left[\sum_{i=1}^n \omega_i\, \boldsymbol{X}_i\boldsymbol{X}_i'\right]\left(\sum_{i=1}^n \widehat{Y}_i\, \boldsymbol{X}_i\boldsymbol{X}_i'\right)^{-1}$$

In short, everything fundamentally depends on the specification of the conditional variance. In R, it is the type-1 negative binomial regression that is considered, i.e.

$$\omega_i=\text{Var}(Y_i|\boldsymbol{X}_i)=\phi\cdot\mathbb{E}(Y_i|\boldsymbol{X}_i)=\phi\cdot\widehat{Y}_i$$

We still have a relationship of the form

$$\widehat{\boldsymbol{\beta}}_{QP}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{QP}))$$

with (simplifying a little)

$$V_\infty(\widehat{\boldsymbol{\beta}}_{QP})=\phi\cdot\left(\sum_{i=1}^n \widehat{Y}_i\, \boldsymbol{X}_i\boldsymbol{X}_i'\right)^{-1}$$

so that

$$V_\infty(\widehat{\boldsymbol{\beta}}_{QP})=\phi\cdot V_\infty(\widehat{\boldsymbol{\beta}}_{P})$$
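As a quick illustration of that last relationship (a small sketch, on simulated – hypothetical – overdispersed counts), the standard errors reported for a quasi-Poisson regression are simply the Poisson ones multiplied by $\sqrt{\widehat{\phi}}$,

> set.seed(1)
> x=rnorm(1000)
> y=rnbinom(1000,size=2,mu=exp(1+.5*x))   # overdispersed count data
> regP=glm(y~x,family=poisson)
> regQP=glm(y~x,family=quasipoisson)
> phi=summary(regQP)$dispersion            # estimated dispersion parameter
> cbind(sqrt(phi)*summary(regP)$coefficients[,2],
+ summary(regQP)$coefficients[,2])         # the two columns coincide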

But, as announced in class, points were given to those who simply stated that the variance of the estimators is larger when there is overdispersion.

Midterm exam, logistic and Poisson regression

The ACT2040 midterm exam will take place on Wednesday morning, from 9:00 to 12:00. No documents are allowed, except calculators (standard model, see the course outline), and phones will be strictly forbidden. There will be 34 questions on the first part of the course (up to the end of count models, sections 1 to 5 of the slides). 15 questions will be based on the dataset described in a previous post, on the number of extramarital affairs. The goal will be to describe the outputs online here. I am giving you 36 hours to go through these outputs. A copy will be handed out during the exam (printed 2 pages per sheet, as in the online version: if anyone needs a copy printed in a larger format, please let me know before the exam).

Triangles and loss reserving

The first part of the course on loss reserving (computing the provision for claims outstanding) will take place in 10 days. The slides are online here, and cover the construction of payment triangles. The chain ladder method (and the formalization proposed by Thomas Mack), as well as its extensions, will be presented. The second part will deal with methods based on Poisson regression.

Multiple (smoothed) regression and portfolio exposure

Wednesday, in class, we've seen how to visualize a multiple regression model (with two continuous explanatory variables). Here, the goal is to predict the average cost of an insurance claim, using some covariates, e.g. the age of the driver, and the age of the car (recall that losses here are liability losses). The prediction is obtained from a (standard) generalized linear model, with a log link,

> reg1=glm(cout~ageconducteur+agevehicule,data=base,family=Gamma(link="log"))

The code to visualize the predicted average cost is the following: first, we have to compute predictions for specific values,

> pred=function(x,y){
+ predict(reg1,newdata=data.frame(ageconducteur=x,
+ agevehicule=y),type="response")
+ }

Then, we use this function to compute values on a grid,

> X=seq(20,80,by=5)
> Y=0:20
> Z=outer(X,Y,pred)
> image(X,Y,Z,col=rev(heat.colors(101)))
> contour(X,Y,Z,add=TRUE,
+ levels=c(1400,1800,2000,2200,2400,2600,2800,3000,3200,4000,5000))

If we use factors instead of continuous covariates (cut versions of those two variables),

> reg2=glm(cout~cut(ageconducteur,breaks=c(0,22,35,55,80,100))*
+               cut(agevehicule,breaks=c(-1,1,3,5,10,100)),
+ data=base,family=Gamma(link="log"))

(note that we consider the Cartesian product, so values are computed for each combination of the two factors, age of the driver and age of the car) we obtain

Obviously, we're missing something here: the most expensive class with one model is the cheapest with the other one! Of course, it might come from our classes (that were chosen a bit randomly), but it might be interesting to use nonlinear functions of the ages. So, let us use splines to smooth those two variables,

> library(splines)
> reg3=glm(cout~bs(ageconducteur)+bs(agevehicule),data=base,
+ family=Gamma(link="log"))

With additive smoothed functions, we obtained a symmetric graph (due to the additive property)

while with a bivariate spline

> library(mgcv)
> reg4=gam(cout~s(ageconducteur,agevehicule),data=base,
+ family=Gamma(link="log"))
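One possible way to visualize that fitted surface (a sketch, assuming reg4 as defined above) is the vis.gam function from the mgcv package,

> vis.gam(reg4,view=c("ageconducteur","agevehicule"),
+ plot.type="contour",type="response")

which draws the predicted average cost on a grid of the two ages.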

(for some odd reason, I could not easily use a bivariate spline in the generalized linear model, but it did work with a generalized additive model – which is by no means additive now). We can identify regions where the average cost can be extremely high… But, as mentioned Wednesday, one should keep in mind that some parts of the square above are never reached. More precisely, the distribution of the portfolio, as a function of those two covariates, is the following

Thus, the proportion of young drivers with a brand new car, and the proportion of old drivers with a very old car, are rather small… If the goal is to find niches, one should look at the prediction more carefully, but if the goal is to make sure that everyone can get insurance coverage, maybe we should accept that some drivers are under-priced (especially when they are rare in the portfolio). And one should keep in mind that average costs are extremely sensitive to large losses, as discussed previously in http://freakonometrics.hypotheses.org/3490 (and in class).

In the univariate case, I have migrated an old post, where I tried to reproduce (in R and in French) some standard graphs in the insurance industry: it is always interesting to visualize not only the prediction obtained from our models, but also the size of each class in the portfolio.

The post is online here http://freakonometrics.hypotheses.org/1224

Readings on IBNR and claims reserving

The second part of the course on nonlife insurance will be dedicated to IBNR and claims reserving techniques. The main reference is the textbook by Mario Wüthrich and Michael Merz (a preliminary version can be downloaded from http://actuaries.ch/…)

The first reference is Best Estimates for Reserves, by Glen Barnett and Ben Zehnwirth, available online at http://casact.org/pubs/…. In 2004, Ben Zehnwirth, Julie Sims and Mark Shapland published Will Your Next Reserve Increase Be Your Last, available at http://contingencies.org/janfeb04/…. Finally, on simulation-based techniques, The Actuary published an article about the bootstrap, http://insureware.com/Library/… For further reading, here are some articles found in the CAS forums, the ASTIN conferences, etc.,

Data for logistic and Poisson regression

For Wednesday's class, two small datasets, to practice modelling 0/1 variables or count variables,

> base = read.table("http://freakonometrics.free.fr/base-glm-act2040.txt",
+ header=TRUE)

or

> base = read.table("http://freakonometrics.free.fr/base-pratique-act2040.txt",
+ header=TRUE)

Otherwise, a more complete dataset for ratemaking,

> BASEN=read.table("http://freakonometrics.free.fr/baseN.txt",header=TRUE,sep=";")
> BASEY=read.table("http://freakonometrics.free.fr/baseY.txt",header=TRUE,sep=";")
> head(BASEN)
ageconducteur agepermis sexeconducteur situationfamiliale  habitation zone
1            57        39              F             Celiba peri-urbain    8
2            54        35              H             Celiba      urbain    3
3            51        32              F             Celiba      urbain    1
4            53        35              H              Marie       rural    4
5            61        43              H              Marie      urbain    8
6            60        29              F              Marie peri-urbain    1
agevehicule proprietaire    payment  marque         poids     usage
1          12    locataire     Annuel  AUTRES     8.>3500kg PROMENADE
2          20     sans mrp Semestriel PEUGEOT 4.3100-3199kg PROMENADE
3           4     sans mrp     Annuel  RAPIDO     1.<2700kg PROMENADE
4           1     sans mrp     Annuel  AUTRES 3.3000-3099kg PROMENADE
5           1 proprietaire     Annuel    FIAT 6.3300-3399kg PROMENADE
6          10     sans mrp    Mensuel    FIAT     8.>3500kg PROMENADE
exposition nombre   voiture
1          1      0 Monospace
2          1      0   Berline
3          1      0  sans avp
4          1      0  sans avp
5          1      1 Monospace
6          1      0  sans avp

A (brief) description of the variables is as follows,

  • ageconducteur: age of the main driver of the vehicle
  • agepermis: number of years the main driver has held a driving licence
  • sexeconducteur: gender of the main driver (H or F)
  • situationfamiliale: marital status of the main driver ("Celiba", "Marie" or "Veuf/Div")
  • habitation: type of area where the main driver lives ("peri-urbain", "rural" or "urbain")
  • zone: geographical zone (from 1 to 8)
  • agevehicule: age of the vehicle
  • proprietaire: if the main driver also has a home insurance policy, his or her status ("locataire" or "proprietaire"), otherwise "sans mrp"
  • payment: payment frequency of the motor insurance premium ("Annuel", "Mensuel" or "Semestriel")
  • marque: make of the vehicle
> levels(BASEN[,10])
[1] "ADRIA"       "AUTOSTAR"    "AUTRES"      "BURSTNER MOBIL"
[5] "CHALLENGER"  "CHAUSSON"    "CITROEN"     "FIAT"
[9] "FORD"        "HYMERMOBIL"  "MERCEDES"    "PEUGEOT"
[13] "PILOTE"     "RAPIDO"      "RENAULT"     "VOLKSWAGEN"
  • poids: weight class of the vehicle
> levels(BASEN[,11])
[1] "1.<2700kg"    "2.2700-2999kg""3.3000-3099kg""4.3100-3199kg"
[5] "5.3200-3299kg""6.3300-3399kg""7.3400-3499kg""8.>3500kg"
  • usage: use of the main vehicle ("PROMENADE" or "TOUS_DEPLACEMENTS")
  • exposition: exposure, in years
  • nombre: number of third-party liability claims of the main driver during the past year
  • cout: cost of the claim
  • voiture: type of vehicle
> levels(BASEN[,15])
[1] "Berline"            "Break"              "Buggy"
[4] "Cabriolet"          "Combispace"         "Coup\xe9"
[7] "Coup\xe9 Cabriolet" "Jeep"               "Minibus"
[10] "Minispace"          "Monospace"         "sans avp"

The variable of interest here is the number of claims,

> table(BASEN$nombre)

    0     1 
60155  3264

The dataset is a bit peculiar – we will discuss it in class – as policyholders had either 0 or 1 claim during the year.
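As a (purely illustrative) starting point, one possible first model is a Poisson regression with the exposure as an offset, e.g.

> reg=glm(nombre~ageconducteur+agevehicule+offset(log(exposition)),
+ family=poisson,data=BASEN)
> summary(reg)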

Reading week and count data

As announced in class (for those who want to use the reading week to prepare), part of the midterm exam will be based on the dataset

> base=read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
> tail(base)
    SEX AGE YEARMARRIAGE CHILDREN RELIGIOUS EDUCATION OCCUPATION SATISFACTION Y
596   1  47         15.0        1         3        16          4            2 7
597   1  22          1.5        1         1        12          2            5 1
598   0  32         10.0        1         2        18          5            4 6
599   1  32         10.0        1         2        17          6            5 2
600   1  22          7.0        1         3        18          6            2 2
601   0  32         15.0        1         3        14          1            5 1

This dataset was built from the data of the article A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy. The variable of interest is (as its name suggests) Y, the number of extramarital affairs over the past year, with several explanatory variables (a small modelling sketch is given after the list below)

  • sex: 0 for a woman, 1 for a man
  • age: age of the respondent
  • yearmarriage: number of years of marriage
  • children: 0 if the respondent has no children (with his or her spouse), and 1 otherwise
  • religious: degree of "religiosity", from 1 (anti-religious) to 5 (very religious)
  • education: number of years of education, 9=grade school, 12=high school, …, up to 20=PhD
  • occupation: coded according to Hollingshead's scale (see http://cba.uah.edu/berkowd/….)
    • Higher executives of large concerns, proprietors, and major professionals (1)
    • Business managers, proprietors of medium-sized businesses, and lesser professionals (2)
    • Administrative personnel, owners of small businesses, and minor professionals (3)
    • Clerical and sales workers, technicians, and owners of little businesses (4)
    • Skilled manual employees (5)
    • Machine operators and semiskilled employees (6)
    • Unskilled employees (7)
  • satisfaction: self-assessment of the marriage, from very unhappy (1) to very happy (5)
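As mentioned above, a possible starting point (a minimal sketch only, keeping all covariates numeric) is a Poisson regression,

> reg=glm(Y~SEX+AGE+YEARMARRIAGE+CHILDREN+RELIGIOUS+
+ EDUCATION+OCCUPATION+SATISFACTION,family=poisson,data=base)
> summary(reg)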

A priori, I will not answer questions about these data. Good luck, and enjoy the reading week.

Further readings on GLMs and ratemaking

Some articles found in actuarial journals, on ratemaking,

and in the CAS forums and ASTIN conference papers,

Modelling individual claim costs in ratemaking

Before wrapping up the part of the course on ratemaking, we will talk about modelling individual claim costs. We will discuss Gamma and lognormal distributions (on the latter, I suggest rereading what was said about log-linear models in the regression course, recalled in a short post published last fall). We will also talk about mixtures of distributions, and about multinomial models. The slides are online here,

To go further, there is the article by Fu & Moncher (2004) on the comparison of the Gamma and lognormal distributions, http://casact.org/…, or Holler, Sommer & Trahair (1999), http://casact.org/…, which offered a state of the art about fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, online at http://casact.org/….

Modeling individual losses with mixtures

Usually, the sentence that I keep saying in my regression classes is "please, look at your data". In our previous post, we've been playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.10.26.png

It looks like there are fixed-cost claims in our database. How do we deal with that in the standard framework (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

$$f(y) = p_1\, f_1(y) + p_2\, \delta_{\kappa}(y) + p_3\, f_3(y)$$

with

  • a distribution for small claims, $f_1(\cdot)$, e.g. an exponential distribution
  • a Dirac mass at $\kappa$, i.e. $\delta_{\kappa}(\cdot)$
  • a distribution for larger claims, $f_3(\cdot)$, e.g. a Gamma, or a lognormal, distribution
>  I1=which(couts$cout<1120)
>  I2=which((couts$cout>=1120)&(couts$cout<1220))
>  I3=which(couts$cout>=1220)
>  (p1=length(I1)/nrow(couts))
[1] 0.3284823
>  (p2=length(I2)/nrow(couts))
[1] 0.4152807
>  (p3=length(I3)/nrow(couts))
[1] 0.256237
>  X=couts$cout
>  (kappa=mean(X[I2]))
[1] 1171.998
>  X0=X[I3]-kappa
>  u=seq(0,10000,by=20)
>  F1=pexp(u,1/mean(X[I1]))
>  F2= (u>kappa)
>  F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
>  F=F1*p1+F2*p2+F3*p3
>  lines(u,F)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.13.43.png

In our previous post, we’ve discussed the idea that all parameters might be related to some covariates, i.e.

$$f(y|\boldsymbol{X}) = p_1(\boldsymbol{X})\, f_1(y|\boldsymbol{X}) + p_2(\boldsymbol{X})\, \delta_{\kappa}(y) + p_3(\boldsymbol{X})\, f_3(y|\boldsymbol{X})$$

which yields the following premium model,

$$\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s_1)}_{A}\cdot \mathbb{P}(Y\leq s_1|\boldsymbol{X}) + \underbrace{\mathbb{E}(Y|Y\in(s_1,s_2],\boldsymbol{X})}_{B}\cdot \mathbb{P}(Y\in(s_1,s_2]|\boldsymbol{X}) + \underbrace{\mathbb{E}(Y|Y>s_2,\boldsymbol{X})}_{C}\cdot \mathbb{P}(Y>s_2|\boldsymbol{X})$$

For the $A$, $B$ and $C$ terms, that's easy, we can use the standard models we've seen in the course. For the probabilities, we should use a multinomial model. Recall that for the logistic regression model, if $(\pi,1-\pi)=(\pi_1,\pi_2)$, then

$$\log \frac{\pi}{1-\pi}=\log \frac{\pi_1}{\pi_2} =\boldsymbol{X}'\boldsymbol{\beta}$$

i.e.

$$\pi_1 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$$

and

$$\pi_2 = \frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$$

To derive a multivariate extension, write

$$\pi_1 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

$$\pi_2 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

and

$$\pi_3 = \frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

Again, maximum likelihood techniques can be used, since

$$\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto \prod_{i=1}^n \prod_{j=1}^3 \pi_{i,j}^{Y_{i,j}}$$

where here, the variable $Y_{i}$ – which takes three levels – is split into three indicator variables (like any categorical explanatory variable in a standard regression model). Thus,

$$\log \mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto \sum_{i=1}^n \sum_{j=1}^2 Y_{i,j}\, \boldsymbol{X}_i'\boldsymbol{\beta}_j - \sum_{i=1}^n \log\left[1+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_2)\right]$$

and, as for the logistic regression, the Newton-Raphson algorithm can then be used to compute the maximum likelihood estimate numerically. In R, first we have to define the levels, e.g.

> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
> head(couts,5)
  nocontrat    no garantie    cout exposition zone puissance agevehicule
1      1870 17219      1RC 1692.29       0.11    C         5           0
2      1963 16336      1RC  422.05       0.10    E         9           0
3      4263 17089      1RC  549.21       0.65    C        10           7
4      5181 17801      1RC  191.15       0.57    D         5           2
5      6375 17485      1RC 2031.77       0.47    B         7           4
  ageconducteur bonus marque carburant densite region tranches
1            52    50     12         E      73     13    large
2            78    50     12         E      72     13    small
3            27    76     12         D      52      5    small
4            26   100     12         D      83      0    small
5            46    50      6         E      11     13    large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights:  30 (18 variable)
initial  value 2113.730043 
iter  10 value 2063.326526
iter  20 value 2059.206691
final  value 2059.134802 
converged

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + 
    carburant, data = couts)

Coefficients:
      (Intercept) ageconducteur agevehicule      zoneB      zoneC
fixed  -0.2779176   0.012071029  0.01768260 0.05567183 -0.2126045
large  -0.7029836   0.008581459 -0.01426202 0.07608382  0.1007513
           zoneD      zoneE      zoneF   carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large  0.3434686  0.1803350 -0.1969320  0.039414682

Std. Errors:
      (Intercept) ageconducteur agevehicule     zoneB     zoneC     zoneD
fixed   0.2371936   0.003738456  0.01013892 0.2259144 0.1776762 0.1838344
large   0.2753840   0.004203217  0.01189342 0.2746457 0.2122819 0.2151504
          zoneE     zoneF carburantE
fixed 0.1830139 0.3377169  0.1106009
large 0.2160268 0.3624900  0.1243560

To visualize the impact of a single covariate, one can also use spline functions,

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights:  9 (4 variable)
initial  value 2113.730043 
final  value 2072.462863 
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2070.496939
iter  20 value 2069.787720
iter  30 value 2069.659958
final  value 2069.479535 
converged

For instance, if the covariate is the age of the car, we do have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
    small     fixed     large 
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.02.55.png
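Such a graph can be obtained, for instance, with something like (a small sketch, using the fitted multinomial model above)

> ages=data.frame(agevehicule=0:20)
> proba=predict(reg,newdata=ages,type="probs")
> matplot(ages$agevehicule,proba,type="l",lty=1,lwd=2,
+ xlab="age of the car",ylab="probability")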

For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the density of population in the area where the driver lives, we obtain the following probabilities,

> reg=multinom(tranches~bs(densite),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2068.469825
final  value 2068.466349 
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
    small     fixed     large 
0.3484422 0.3473315 0.3042263

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.05.29.png

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density). But first, let us define subsets of the whole dataset,

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with a threshold given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models,

> reg=multinom(tranches~bs(densite),data=couts)
> regA=glm(cout~bs(densite),data=sousbaseA,family=Gamma(link="log"))
> regB=glm(cout~1,data=sousbaseB,family=Gamma(link="log"))
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)

To visualize the impact of each component on the premium, we can compute the probabilities, as well as the expected costs (given that the cost belongs to each subset),

> cbind(proba,pred)[seq(10,90,by=10),]
       small     fixed     large    predA    predB    predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-15-a%CC%80-11.50.47.png

(the dotted horizontal line is the average cost of a claim, in our dataset).
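Note that the pure premium itself, as a function of the density, is simply the sum of the three components plotted above; a quick sketch,

> prime=rowSums(proba*pred)      # probability times conditional expected cost, summed
> plot(nouveau$densite,prime,type="l",xlab="density",ylab="expected cost")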

Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposures. But I used only one factor to build classes. Of course, it is possible to use many more factors, for instance, using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
> for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)                                   # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)             # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+   cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24]  average = 0.06274415  variance = 0.06174966 
Class D A (24,40]  average = 0.07271905  variance = 0.07675049 
Class D A (40,65]  average = 0.05432262  variance = 0.06556844 
Class D A (65,101] average = 0.03026999  variance = 0.02960885 
Class D B (17,24]  average = 0.2383109   variance = 0.2442396 
Class D B (24,40]  average = 0.06662015  variance = 0.07121064 
Class D B (40,65]  average = 0.05551854  variance = 0.05543831 
Class D B (65,101] average = 0.0556386   variance = 0.0540786 
Class D C (17,24]  average = 0.1524552   variance = 0.1592623 
Class D C (24,40]  average = 0.0795852   variance = 0.09091435 
Class D C (40,65]  average = 0.07554481  variance = 0.08263404 
Class D C (65,101] average = 0.06936605  variance = 0.06684982 
Class D D (17,24]  average = 0.1584052   variance = 0.1552583 
Class D D (24,40]  average = 0.1079038   variance = 0.121747 
Class D D (40,65]  average = 0.06989518  variance = 0.07780811 
Class D D (65,101] average = 0.0470501   variance = 0.04575461 
Class D E (17,24]  average = 0.2007164   variance = 0.2647663 
Class D E (24,40]  average = 0.1121569   variance = 0.1172205 
Class D E (40,65]  average = 0.106563    variance = 0.1068348 
Class D E (65,101] average = 0.1572701   variance = 0.2126338 
Class D F (17,24]  average = 0.2314815   variance = 0.1616788 
Class D F (24,40]  average = 0.1690485   variance = 0.1443094 
Class D F (40,65]  average = 0.08496827  variance = 0.07914423 
Class D F (65,101] average = 0.1547769   variance = 0.1442915 
Class E A (17,24]  average = 0.1275345   variance = 0.1171678 
Class E A (24,40]  average = 0.04523504  variance = 0.04741449 
Class E A (40,65]  average = 0.05402834  variance = 0.05427582 
Class E A (65,101] average = 0.04176129  variance = 0.04539265 
Class E B (17,24]  average = 0.1114712   variance = 0.1059153 
Class E B (24,40]  average = 0.04211314  variance = 0.04068724 
Class E B (40,65]  average = 0.04987117  variance = 0.05096601 
Class E B (65,101] average = 0.03123003  variance = 0.03041192 
Class E C (17,24]  average = 0.1256302   variance = 0.1310862 
Class E C (24,40]  average = 0.05118006  variance = 0.05122782 
Class E C (40,65]  average = 0.05394576  variance = 0.05594004 
Class E C (65,101] average = 0.04570239  variance = 0.04422991 
Class E D (17,24]  average = 0.1777142   variance = 0.1917696 
Class E D (24,40]  average = 0.06293331  variance = 0.06738658 
Class E D (40,65]  average = 0.08532688  variance = 0.2378571 
Class E D (65,101] average = 0.05442916  variance = 0.05724951 
Class E E (17,24]  average = 0.1826558   variance = 0.2085505 
Class E E (24,40]  average = 0.07804062  variance = 0.09637156 
Class E E (40,65]  average = 0.08191469  variance = 0.08791804 
Class E E (65,101] average = 0.1017367   variance = 0.1141004 
Class E F (17,24]  average = 0           variance = 0 
Class E F (24,40]  average = 0.07731177  variance = 0.07415932 
Class E F (40,65]  average = 0.1081142   variance = 0.1074324 
Class E F (65,101] average = 0.09071118  variance = 0.1170159

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.58.26.png

An alternative is to use a tree. The tree can be obtained from another variable (whether or not the insured had a claim during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period considered). Here, I used the whole database (with more than 600,000 lines),

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split =  "gini",minsize =25000)

The tree is the following

> plot(T)
> text(T)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.55.13.png

Now, each leaf of the tree defines a class, which is supposed to be homogeneous.

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
> for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)                                   # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)             # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+   cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class  6 average =   0.04010406  variance = 0.04424163 
Class  8 average =   0.05191127  variance = 0.05948133 
Class  9 average =   0.07442635  variance = 0.08694552 
Class  10 average =  0.4143646   variance = 0.4494002 
Class  11 average =  0.1917445   variance = 0.1744355 
Class  15 average =  0.04754595  variance = 0.05389675 
Class  20 average =  0.08129577  variance = 0.0906322 
Class  22 average =  0.05813419  variance = 0.07089811 
Class  23 average =  0.06123807  variance = 0.07010473 
Class  24 average =  0.06707301  variance = 0.07270995 
Class  25 average =  0.3164557   variance = 0.2026906 
Class  26 average =  0.08705041  variance = 0.108456 
Class  27 average =  0.06705214  variance = 0.07174673 
Class  30 average =  0.05292652  variance = 0.06127301 
Class  31 average =  0.07195285  variance = 0.08620593 
Class  32 average =  0.08133722  variance = 0.08960552 
Class  34 average =  0.1831559   variance = 0.2010849 
Class  39 average =  0.06173885  variance = 0.06573939 
Class  41 average =  0.07089419  variance = 0.07102932 
Class  44 average =  0.09426152  variance = 0.1032255 
Class  47 average =  0.03641669  variance = 0.03869702 
Class  49 average =  0.0506601   variance = 0.05089276 
Class  50 average =  0.06373107  variance = 0.06536792 
Class  51 average =  0.06762947  variance = 0.06926191 
Class  56 average =  0.06771764  variance = 0.07122379 
Class  57 average =  0.04949142  variance = 0.05086885 
Class  58 average =  0.2459016   variance = 0.2451116 
Class  59 average =  0.05996851  variance = 0.0615773 
Class  61 average =  0.07458053  variance = 0.0818608 
Class  63 average =  0.06203737  variance = 0.06249892 
Class  64 average =  0.07321618  variance = 0.07603106 
Class  66 average =  0.07332127  variance = 0.07262425 
Class  68 average =  0.07478147  variance = 0.07884597 
Class  70 average =  0.06566728  variance = 0.06749411 
Class  71 average =  0.09159605  variance = 0.09434413 
Class  75 average =  0.03228927  variance = 0.03403198 
Class  76 average =  0.04630848  variance = 0.04861813 
Class  78 average =  0.05342351  variance = 0.05626653 
Class  79 average =  0.05778622  variance = 0.05987139 
Class  80 average =  0.0374993   variance = 0.0385351 
Class  83 average =  0.06721729  variance = 0.07295168 
Class  86 average =  0.09888492  variance = 0.1131409 
Class  87 average =  0.1019186   variance = 0.2051122 
Class  88 average =  0.05281703  variance = 0.0635244 
Class  91 average =  0.08332136  variance = 0.09067632 
Class  96 average =  0.07682093  variance = 0.08144446 
Class  97 average =  0.0792268   variance = 0.08092019 
Class  99 average =  0.1019089   variance = 0.1072126 
Class  100 average = 0.1018262   variance = 0.1081117 
Class  101 average = 0.1106647   variance = 0.1151819 
Class  103 average = 0.08147644  variance = 0.08411685 
Class  104 average = 0.06456508  variance = 0.06801061 
Class  107 average = 0.1197225   variance = 0.1250056 
Class  108 average = 0.0924619   variance = 0.09845582 
Class  109 average = 0.1198932   variance = 0.1209162

Here, plotting the empirical variance (per leaf) against the empirical average of claims, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.05.08.png

Here, we can identify classes where some heterogeneity remains.
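To go beyond the graphical check, a quick way to quantify overdispersion (a minimal sketch, assuming the same frequency dataset sinistres as above, with claim counts nbre and exposures exposition) is to look at the estimated dispersion parameter of a quasi-Poisson regression,

> reg=glm(nbre~as.factor(zone)+as.factor(carburant)+offset(log(exposition)),
+ family=quasipoisson,data=sinistres)
> summary(reg)$dispersion        # values well above 1 suggest overdispersion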