Category Archives: Ratemaking

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency, but also the individual costs, can be explained by some covariates. Of course, appropriate families should be considered to model the distribution of the cost https://latex.codecogs.com/gif.latex?Y, given some covariates https://latex.codecogs.com/gif.latex?\boldsymbol{X}. Here is the dataset we’ll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution and the inverse Gaussian, or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution: a correction, exp(mu+sigma^2/2), should be used here to get an unbiased estimator of the average cost,

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,
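For instance, the regressions plotted below might be obtained with something like the following sketch (using bs() from the splines package; the .bs names are mine),

> library(splines)
> reggamma.bs <- glm(cout~bs(agevehicule),family=Gamma(link="log"),
+ data=couts)
> Pgamma.bs <- predict(reggamma.bs,newdata=data.frame(agevehicule=age),
+ type="response")
> reglm.bs <- lm(log(cout)~bs(agevehicule),data=couts)
> sigma <- summary(reglm.bs)$sigma
> mu <- predict(reglm.bs,newdata=data.frame(agevehicule=age))
> Pln.bs <- exp(mu+sigma^2/2)
> plot(age,Pgamma.bs,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln.bs,col="blue",type="b")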

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation, the lognormal model seems to be less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $ 4 million claim, with a 13 year old car. This is an outlier for the Gamma regression, which clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not within their own class only, but among all the insured of the portfolio. To formalize this idea, observe that we can write

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where the blue part is associated to normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like that: a large claim – here – is one above $ 10,000 (a threshold one has to fix),

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

i.e. large claims represent (a bit less than) 2% of the claims in our dataset. We can run three sets of regressions, with a smooth regression on the age of the car. The first one models the individual costs of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one models the individual costs of normal claims,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally, a third one models the probability of having a normal-sized claim, given that a claim occurred,

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that, each time, we have something that can be interpreted either as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X},Y\gtrless%20%20s), or https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|Y\gtrless%20%20s) – i.e. no covariate is considered in the latter. On the graph below, we did plot

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
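The predictions plotted below are then obtained by simply combining the fitted components (a sketch, using the objects ypA, ypB, ypC, ypB2 and ypC2 computed above; the names of the combined vectors are mine, and the last two anticipate the two variants discussed below),

> ypABC  =ypA*ypC +ypB*(1-ypC)    # all three components are functions of the age of the car
> ypAB2C =ypA*ypC +ypB2*(1-ypC)   # size of large claims not related to any covariate
> ypAB2C2=ypA*ypC2+ypB2*(1-ypC2)  # neither the size nor the probability of large claims
> plot(age,ypABC,type="l",xlab="age of the car",ylab="expected cost")
> lines(age,ypAB2C,col="blue")
> lines(age,ypAB2C2,col="red")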

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all. I.e.

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but that neither is the probability of having an extremely large claim,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s)}_{B%27}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s)}_{B%27}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif

From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Exposure with binomial responses

Last week, we’ve seen how to take into account the exposure to compute nonparametric estimators of several quantities (empirical means and empirical variances) incorporating that exposure. Let us see what can be done if we want to model a binomial response. The model here is the following:

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

which can be visualized below,

http://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

Consider the case where the variable of interest is not the number of claims, but simply the indicator of the occurrence of a claim. Then we wish to model the event https://latex.codecogs.com/gif.latex?\{N=0\} versus https://latex.codecogs.com/gif.latex?\{N%3E0\}, interpreted as non-occurrence and occurrence, given the fact that we can only observe https://latex.codecogs.com/gif.latex?\{Y=0\} versus https://latex.codecogs.com/gif.latex?\{Y%3E0\}. Having an inclusion is not enough to derive a model. Actually, with a Poisson process model, we can easily get that

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0)%20=%20\mathbb{P}(N=0)^E

In words, it means that the probability of not having a claim in the first six months of the year is the square root of the probability of not having a claim over the whole year (e.g. if the latter is 93%, the former is √0.93 ≈ 96.4%). Which makes sense. Assume that the probability of not having a claim can be explained by some covariates, denoted https://latex.codecogs.com/gif.latex?\boldsymbol{X}, through some link function (using the GLM terminology),

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=0|\boldsymbol{X})=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, since we do observe https://latex.codecogs.com/gif.latex?Y – and not https://latex.codecogs.com/gif.latex?N – we have

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})^E

The dataset we will use is always the same

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> T=table(sinistres$nocontrat)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> sinistres = merge(contrat,nombre)
> sinistres$nonsin = (sinistres$nbre==0)

The first model we can consider is based on the standard logistic approach, i.e.

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\left(\frac{\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}\right)^E

That’s nice, but difficult to handle with standard functions. Nevertheless, it is always possible to compute numerically the maximum likelihood estimator of https://latex.codecogs.com/gif.latex?\boldsymbol{\beta} given https://latex.codecogs.com/gif.latex?(Y_i,\boldsymbol{X}_i,E_i).

> Y=sinistres$nonsin
> X=cbind(1,sinistres$ageconducteur)
> E=sinistres$exposition
> logL = function(beta){
+ 	pi=(exp(X%*%beta)/(1+exp(X%*%beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))
+ }
> optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")
$par
[1] 2.14420560 0.01040707
$value
[1] 7604.073
$counts
function gradient 
      42       10 
$convergence
[1] 0
$message
NULL
> parametres=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")$par

Now, let us look at alternatives, based on standard regression models. For instance a binomial-log model. Because the exposure appears as a power of the annual probability, everything would be fine if https://latex.codecogs.com/gif.latex?h was the exponential function (or https://latex.codecogs.com/gif.latex?h^{-1} was the log link function), since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\exp(E+\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, if we try to code it, it quickly becomes problematic,

> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"))
Error: no valid set of coefficients has been found: please supply starting values

I tried (almost) everything I could, but I could not get rid of that error message,

> startglm=c(0,-.001)
> names(startglm)=c("(Intercept)","ageconducteur")
> etaglm=rep(-.01,nrow(sinistres))
> etaglm[sinistres$nonsin==0]=-10
> muglm=exp(etaglm)
> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"),
+ control = glm.control(epsilon=1e-5,trace=TRUE,maxit=50),
+ start=startglm,
+ etastart=etaglm,mustart=muglm)
Deviance = NaN Iterations - 1 
Error: no valid set of coefficients has been found: please supply starting values

So I decided to give up. Almost. Actually, the problem comes from the fact that https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0) is close to 1. I guess everything would be nicer if we could work with probabilities close to 0. Which is possible, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)=1-\mathbb{P}(Y=0)%20=%201-[1-\mathbb{P}(N%3E0)]^E

where https://latex.codecogs.com/gif.latex?\mathbb{P}(N%3E0) is close to 0. So we can use Taylor’s expansion,

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)\sim%201-[1-E\cdot%20\mathbb{P}(N%3E0)]=E\cdot%20\mathbb{P}(N%3E0)

Here, the exposure no longer appears as a power of the probability, but multiplicatively. Of course, there are higher order terms. But let us forget them (so far). If – one more time – we consider a log link function, then we can incorporate the exposure, or to be more specific, the logarithm of the exposure.

> regopp=glm((1-nonsin)~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

which now works perfectly.

Now, to get a final model, perhaps we should get back to our Poisson regression model, since it does provide a model for the probabilities https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=\cdot).

> regpois=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))

We can now compare those three models. Perhaps we should also include the prediction without any explanatory variable. For the second model (which does run without any explanatory variable), we use

>  regreff=glm((1-nonsin)~1+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

so that the prediction is here

> exp(coefficients(regreff))
(Intercept) 
 0.06776376

This value is comparable with the logistic regression,

> logL2 = function(beta){
+ 	pi=(exp(beta)/(1+exp(beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))}
> param=optim(fn=logL2,par=.01,method="BFGS")$par
> 1-exp(param)/(1+exp(param))
[1] 0.06747777

But it is quite different from the Poisson model,

> exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
(Intercept) 
 0.07279295

Let us produce a graph, to compare those models,

> age=18:100
> yml1=exp(parametres[1]+parametres[2]*age)/(1+exp(parametres[1]+parametres[2]*age))
> plot(age,1-yml1,type="l",col="purple")
> yp=predict(regpois,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yp1=1-exp(-yp)
> ydl=predict(regopp,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,ydl,type="l",col="red")
> lines(age,yp1,type="l",col="blue")
> lines(age,1-yml1,type="l",col="purple")
> abline(h=exp(coefficients(regreff)),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.55.591.png

Observe here that the three models are quite different. Actually, with two of the models, it is possible to run more complex regressions, e.g. with splines, to visualize the impact of the age on the probability of having – or not – a car accident. If we compare the Poisson regression (still in red) and the log-binomial model with Taylor’s expansion, we get
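(such spline-based regressions might be fitted with something like the following sketch, using bs() from the splines package on the dataset built above, and assuming the log-link binomial GLM still converges once splines are added),

> library(splines)
> regpoisbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))
> regoppbs=glm((1-nonsin)~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))
> yp1=1-exp(-predict(regpoisbs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response"))
> ydl=predict(regoppbs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,yp1,type="l",col="red")    # Poisson regression, still in red
> lines(age,ydl,col="blue")           # log-binomial model, with Taylor's expansion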

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.39.08.png

The next step is to see how to incorporate the exposure in a tree. But that’s another story…

Natura non facit saltus

(see John Wilkins’ article on the – interesting – history of that phrase http://scienceblogs.com/evolvingthoughts/…). We will see, this week in class, several smoothing techniques, for insurance ratemaking. As a starting point, assume that we do not want to use segmentation techniques: everyone will pay exactly the same price.

  • no segmentation of the premium

And that price should be related to the pure premium, which is proportional to the frequency (or the annualized frequency, as discussed previously), since

https://latex.codecogs.com/gif.latex?\mathbb{E}_{\mathbb{P}}\left(\sum_{i=1}^N%20Y_i\right)=\mathbb{E}_{\mathbb{P}}(N)%20\cdot%20\mathbb{E}_{\mathbb{P}}(Y_i)

The probability measure is mentioned here just to recall that we can use any measure. Even https://latex.codecogs.com/gif.latex?\mathbb{P}_{\boldsymbol{X}} (based on some covariates). Without any covariate, the expected frequency should be

> regglm0=glm(nbre~1+offset(log(exposition)),data=sinistres,family=poisson)
> summary(regglm0)

Call:
glm(formula = nbre ~ 1 + offset(log(exposition)), family = poisson, 
    data = sinistres)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5033  -0.3719  -0.2588  -0.1376  13.2700  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -2.6201     0.0228  -114.9   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12680  on 49999  degrees of freedom
Residual deviance: 12680  on 49999  degrees of freedom
AIC: 16353

Number of Fisher Scoring iterations: 6
> exp(coefficients(regglm0))
(Intercept) 
 0.07279295

Thus, if we do not want to take into account potential heterogeneity, we should assume that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) where https://latex.codecogs.com/gif.latex?\lambda is close to 7.28%. Yes, as mentioned in class, it is rather common to see https://latex.codecogs.com/gif.latex?\lambda as a percentage, i.e. a probability, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq%200)=1-e^{-\lambda}\approx%20\lambda

i.e. https://latex.codecogs.com/gif.latex?\lambda can be interpreted as the probability of having (at least) a claim (see also the law of small numbers). Let us visualize this as a function of the age of the driver,

> a=18:100
> yp=predict(regglm0,newdata=data.frame(ageconducteur=a,exposition=1),type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> k=23
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-constante.png

We do predict the same frequency for all drivers, e.g. for a driver aged 40,

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07279295  confidence interval 0.07611196 0.06947393

Let us now consider the case where we try to take into account heterogeneity, e.g. by age,

  • The (standard) Poisson regression

The idea of the (log-)Poisson regression is to assume that instead of having https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda), we should have https://latex.codecogs.com/gif.latex?N|\boldsymbol{X}\sim\mathcal{P}(\lambda_{\boldsymbol{X}}), where

https://latex.codecogs.com/gif.latex?\lambda_{\boldsymbol{X}}=\exp(\beta_0+\beta_1%20\boldsymbol{X}_1+\cdots+\beta_k\boldsymbol{X}_k)

in a very general setting. Here, let us consider only one explanatory variable, i.e.

https://latex.codecogs.com/gif.latex?\lambda_{X}=\exp(\beta_0+\beta_1%20{X})

Here, we have

> regglm1=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-exp-standard.png

i.e. the prediction for the annualized claim frequency for our 40 year old driver is now 7.74% (which is slightly higher than what we had before, 7.28%)

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07740574  confidence interval 0.08117512 0.07363636

It is possible to compute, not the expected frequency, but the ratio https://latex.codecogs.com/gif.latex?\mathbb{E}(N|X)/\mathbb{E}(N).
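The graph below can probably be obtained with something like the following sketch (using regglm0 and regglm1 fitted above),

> lambda0=exp(coefficients(regglm0))   # expected frequency without segmentation
> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response")
> plot(a,yp/lambda0,type="l",xlab="",ylab="ratio to the average frequency")
> abline(h=1,col="blue")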

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-13.45.43.png

Above the horizontal blue line, the premium will be higher than the one obtained without segmentation, and (of course) lower below. Here, drivers younger than 44 years old will pay more, while drivers older than 44 will pay less. We have discussed, in the introduction, the necessity of segmentation. If we consider two companies, one segmenting while the other one has a flat rate, then older drivers will go to the first company (since insurance is cheaper for them) while younger ones will go to the second one (again, because it is cheaper). The problem is that the second company implicitly hopes that older drivers will compensate the risk. But since they’re gone, insurance will be too cheap, and the company will lose money (if it does not go bankrupt). So companies have to use segmentation techniques to survive. Now, the problem is that we cannot be sure that this exponential decay of the premium is the proper way the premium should evolve as a function of the age. An alternative can be to use nonparametric techniques to visualize the true influence of the age on the claims frequency.

  • A pure nonparametric model

A first model can be to consider a premium, per age. This can be done considering the age of the driver as a factor in the regression,

> regglm2=glm(nbre~as.factor(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> a0=sort(unique(sinistres$ageconducteur))  # only ages observed in the dataset, since the age is a factor here
> yp=predict(regglm2,newdata=data.frame(ageconducteur=a0,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a0,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-factors.png

Here, the forecast for our 40 year old driver is slightly lower than the previous one, but the confidence interval is much larger (since we focus on a very small subclass of the portfolio: drivers aged exactly 40),

Frequency = 0.06686658  confidence interval 0.08750205 0.0462311

Here, the classes are too small, and the premium is too erratic: the premium decreases by 20% from age 40 to 41, and then increases by 50% from age 41 to 42,

> diff(log(yp0[23:25]))
        24         25 
-0.2330241  0.5223478

There is no chance that the company will keep its insured with such a strategy. This discontinuity of the premium is clearly an important issue here.

  • Using age classes

An alternative can be to consider age classes, from very young drivers to senior drivers.

> level1=seq(15,105,by=5)
> regglmc1=glm(nbre~cut(ageconducteur,level1)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> summary(regglmc1)

Coefficients:
                                   Estimate Std. Error z value Pr(>|z|)    
(Intercept)                         -1.6036     0.1741  -9.212  < 2e-16 ***
cut(ageconducteur, level1)(20,25]   -0.4200     0.1948  -2.157   0.0310 *  
cut(ageconducteur, level1)(25,30]   -0.9378     0.1903  -4.927 8.33e-07 ***
cut(ageconducteur, level1)(30,35]   -1.0030     0.1869  -5.367 8.02e-08 ***
cut(ageconducteur, level1)(35,40]   -1.0779     0.1866  -5.776 7.65e-09 ***
cut(ageconducteur, level1)(40,45]   -1.0264     0.1858  -5.526 3.28e-08 ***
cut(ageconducteur, level1)(45,50]   -0.9978     0.1856  -5.377 7.58e-08 ***
cut(ageconducteur, level1)(50,55]   -1.0137     0.1855  -5.464 4.65e-08 ***
cut(ageconducteur, level1)(55,60]   -1.2036     0.1939  -6.207 5.40e-10 ***
cut(ageconducteur, level1)(60,65]   -1.1411     0.2008  -5.684 1.31e-08 ***
cut(ageconducteur, level1)(65,70]   -1.2114     0.2085  -5.811 6.22e-09 ***
cut(ageconducteur, level1)(70,75]   -1.3285     0.2210  -6.012 1.83e-09 ***
cut(ageconducteur, level1)(75,80]   -0.9814     0.2271  -4.321 1.55e-05 ***
cut(ageconducteur, level1)(80,85]   -1.4782     0.3371  -4.385 1.16e-05 ***
cut(ageconducteur, level1)(85,90]   -1.2120     0.5294  -2.289   0.0221 *  
cut(ageconducteur, level1)(90,95]   -0.9728     1.0150  -0.958   0.3379    
cut(ageconducteur, level1)(95,100] -11.4694   144.2817  -0.079   0.9366    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

> yp=predict(regglmc1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,ylim=c(.03,.12),type="s")
> abline(v=40,col="grey")
> lines(a,yp1,lty=2,type="s")
> lines(a,yp2,lty=2,type="s")

Here we obtain the following predictions,

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-1.png

and for our 40 year old driver, the frequency is now 6.84%.

Frequency = 0.0684573  confidence interval 0.07766717 0.05924742

But our classes were defined arbitrarily here. Perhaps we should consider other classes, to see whether the prediction is sensitive to the cut-off values,

> level2=level1-2
> regglmc2=glm(nbre~cut(ageconducteur,level2)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-2.png

which yields the following values for our 40 year old driver,

Frequency = 0.07050614  confidence interval 0.07980422 0.06120807

So here, we did not get rid of the discontinuity problem. An idea can be to consider moving regions: if the goal is to predict the frequency for a 40 year old driver, perhaps the class should be (somehow) centered on 40; and centered on 35 for drivers aged 35, etc.

  • Moving average

Thus, it is natural to consider some local regressions, where only drivers aged almost 40 are considered. This almost concept is related to the bandwidth. For instance, drivers between 35 and 45 can be considered as being almost 40. In practice, we can either consider a subset, or use weights in the regression,

> value=40
> h=5
> sinistres$omega=(abs(sinistres$ageconducteur-value)<=h)*1
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

To see what’s going on, let us consider an animated plot, where the age of interest is changing,

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-2.gif

Here, for our 40 year old driver, we get

Frequency = 0.06913391  confidence interval 0.07535564 0.06291218
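Those values are presumably extracted as before, e.g. with something like

> yp=predict(regglmomega,newdata=data.frame(ageconducteur=value,
+ exposition=1),type="response",se.fit=TRUE)
> cat("Frequency =",yp$fit," confidence interval",yp$fit+2*yp$se.fit,
+ yp$fit-2*yp$se.fit)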

We do obtain a curve that can be interpreted as a local regression. But here, we do not take into account that 35 is not as close to 40 as 39 could be. And here, 34 is assumed to be very far away from 40. Clearly, we could improve that technique: kernel functions can be considered, i.e. the closer to 40, the larger the weight.

> value=40
> h=5
> sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

which can be plotted below

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-1.gif

Here, our prediction for our 40 year old driver is

Frequency = 0.07040464  confidence interval 0.07981521 0.06099408

This is the idea of kernel regression techniques. But as explained in the slides, other nonparametric techniques can be considered, like spline functions.

  • Smoothing with splines

In R, it is simple to use spline functions (somehow much simpler than kernel smoothers),

> library(splines)
> regglmbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-splines.png

The prediction for our 40 year old driver is now

Frequency = 0.06928169  confidence interval 0.07397124 0.06459215

Note that this technique is related to another class of models, the so-called Generalized Additive Models, i.e. GAMs.

> library(mgcv)
> reggam=gam(nbre~s(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-gam.png

The prediction is extremely close to the one we obtained above (the main differences being observed for very old drivers)

Frequency = 0.06912683  confidence interval 0.07501663 0.06323702
  • Comparison of the different models

Somehow, one way or another, all those models are valid. So perhaps we should compare them,
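The comparison below was presumably built by collecting, for each model, the prediction and its bounds for the 40 year old driver. A sketch, restricted to the models explicitly fitted above (the helper extract is mine), could be

> extract=function(reg){
+ yp=predict(reg,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
+ c(yp$fit[k],yp$fit[k]-2*yp$se.fit[k],yp$fit[k]+2*yp$se.fit[k])}
> M=sapply(list(regglm0,regglm1,regglmc1,regglmc2,regglmbs,reggam),extract)
> plot(1:ncol(M),M[1,],pch=19,ylim=range(M),xlab="model",
+ ylab="predicted frequency, driver aged 40")
> segments(1:ncol(M),M[2,],1:ncol(M),M[3,])
> abline(h=M[1,1],lty=2)   # prediction without taking heterogeneity into account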

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.50.19.png

On the graph above, we can visualize the upper and the lower bound of the prediction, for the 9 models. The horizontal line is the predicted value without taking into account heterogeneity. It is possible to consider relative values, with respect to this value,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.54.56.png

Claim frequency, and overdispersion

I keep putting online the slides used as support for the ACT2040 course. In this last part on claim frequency modeling, we will talk about overdispersion. The slides are online here,

Otherwise, among complementary references, I can suggest several documents written by practitioners, such as Meyers (2009) http://casact.org/education/…, Ismail & Jemain (2009) http://casact.org/pubs/… or the very interesting (and critical) document by Schmid (2011) http://casact.org/education/…. The most motivated readers can also skim through sections 2.3 and 2.4 of the book by Denuit et al. (2007), online at http://books.google.ca/…

Overdispersion with different exposures

In actuarial science, and in insurance ratemaking, taking into account the exposure can be a nightmare (in datasets, some clients have been around for a few years – we call that exposure – while others have been around for a few months, or weeks). Somehow, simple results become more complicated to compute just because we have to take into account the fact that the exposure is a heterogeneous variable.

The exposure in insurance ratemaking can be seen as a problem of censored data (in my dataset, the exposure is always smaller than 1 since observations are contracts, not policyholders),

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

And as always, the variable of interest is the unobserved one, because we have to price insurance contracts with a coverage period of one (full) year. So we have to model the annual frequency of insurance claims.

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

In our dataset, we have the https://latex.codecogs.com/gif.latex?(Y_i,E_i)‘s – or more generally, with some additional covariates, the https://latex.codecogs.com/gif.latex?(Y_i,E_i,\boldsymbol{X}_i)‘s. For ratemaking, we need to estimate https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) and perhaps also https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) (for instance to test whether the Poisson assumption is valid, or not). To estimate the expected value, a natural estimator for https://latex.codecogs.com/gif.latex?\mathbb{E}(N) (forget about covariates as a start) is
https://latex.codecogs.com/gif.latex?m_N=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}
which is also the weighted average of the annualized individual counts,
https://latex.codecogs.com/gif.latex?m_N=\sum_{i=1}^n%20\frac{%20E_i}{\sum_{i=1}^n%20E_i}%20\cdot%20\frac{Y_i}{E_i}
i.e. the ratio of the total number of claims to the total exposure-to-risk. This estimator appears for instance if we consider a Poisson process, so that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) while https://latex.codecogs.com/gif.latex?Y\sim\mathcal{P}(\lambda%20\cdot%20E). Then, the likelihood is

https://latex.codecogs.com/gif.latex?\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=\prod_{i=1}^n%20\frac{e^{-\lambda%20E_i}%20[\lambda%20E_i]^{Y_i}}{Y_i!}

i.e.

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20-\lambda%20\sum_{i=1}^n%20E_i%20+\sum_{i=1}^n%20Y_i%20\log[\lambda%20E_i]%20-%20\log\left(\prod_{i=1}^n%20Y_i!\right)

The first order condition is here

https://latex.codecogs.com/gif.latex?\frac{\partial}{\partial%20\lambda}\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20%20-%20\sum_{i=1}^n%20E_i%20+\frac{1}{\lambda}\sum_{i=1}^n%20Y_i%20=0

which is satisfied if

https://latex.codecogs.com/gif.latex?\widehat{\lambda}=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}

So, we do have an estimator for the expected value, and a natural estimator for https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) is then (if we consider categorical covariates)
https://latex.codecogs.com/gif.latex?m_{N|\boldsymbol{x}}%20=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20Y_i}{\sum_%20{i,\boldsymbol{X}_i=\boldsymbol{x}}%20E_i}

Now, we need an estimate for the variance, or more precisely the conditional variance. Assume (as a starting point) that all policyholders have the same exposure https://latex.codecogs.com/gif.latex?E. For instance, if https://latex.codecogs.com/gif.latex?E is one half, insured were observed only during the first six months. Then https://latex.codecogs.com/gif.latex?N=Y+Y%27 with https://latex.codecogs.com/gif.latex?Y\overset{\mathcal%20L}{=}Y%27 (https://latex.codecogs.com/gif.latex?Y is the number of claims during the first six months, while https://latex.codecogs.com/gif.latex?Y%27 is the number of claims during the last six months), i.e. https://latex.codecogs.com/gif.latex?\text{Var}(N)=\text{Var}(Y)+%20\text{Var}(Y%27) if we assume independent increments. I.e.
https://latex.codecogs.com/gif.latex?\text{Var}(N)=2\text{Var}(Y), or conversely https://latex.codecogs.com/gif.latex?E%20\cdot\text{Var}(N)=\text{Var}(Y). More generally, it is reasonable to assume that

https://latex.codecogs.com/gif.latex?\text{Var}(Y)=E\cdot%20\text{Var}(N)
for all values of https://latex.codecogs.com/gif.latex?E. And then
https://latex.codecogs.com/gif.latex?\text{Var}\left(\frac{Y}{E}\right)=\frac{1}{E}\cdot%20\text{Var}(N)
Thus, it seems legitimate to assume that the empirical variance of https://latex.codecogs.com/gif.latex?N can be written
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20S_{Y/E}^2
Since the average of https://latex.codecogs.com/gif.latex?Y_i/E is https://latex.codecogs.com/gif.latex?\overline{N}=m_N, then
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20\frac{1}{n}\sum_{i=1}^n%20\left[\frac{Y_i}{E}-\overline{N}\right]^2%20=%20\frac{1}{n}\sum_{i=1}^n%20E\left[\frac{Y_i}{E}-\overline{N}\right]^2
or equivalently
https://latex.codecogs.com/gif.latex?S_N^2=\frac{1}{n}\sum_{i=1}^n%20\frac{E}{E^2}\left[Y_i-\overline{N}\cdot%20E\right]^2%20=\frac{1}{n}\sum_{i=1}^n%20\frac{1}{E}[Y_i-\overline{N}\cdot%20E]^2
i.e.
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E]^2%20}{nE}
Thus, with different https://latex.codecogs.com/gif.latex?E_i‘s, it would be legitimate (I guess) to consider
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E_i]^2%20}{\sum_{i=1}^n%20E_i}
Thus, an estimator for https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) is
https://latex.codecogs.com/gif.latex?S_{N|\boldsymbol{x}}^2=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20[Y_i-\overline{N}\cdot%20E_i]^2}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}%20}%20E_i}

This can be used to test whether the Poisson assumption is valid to model the claim frequency. Consider the following dataset,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  T=table(sinistres$nocontrat)
>  T1=as.numeric(names(T))
>  T2=as.numeric(T)
>  nombre1 = data.frame(nocontrat=T1,nbre=T2)
>  I = contrat$nocontrat%in%T1
>  T1= contrat$nocontrat[I==FALSE]
>  nombre2 = data.frame(nocontrat=T1,nbre=0)
>  nombre=rbind(nombre1,nombre2)
>  baseFREQ = merge(contrat,nombre)

Here, we do have our two variables of interest, the exposure, per contract,

>  E <- baseFREQ$exposition

and the (observed) number of claims (during that time frame)

>  Y <- baseFREQ$nbre

It is possible to compute, without covariates, the average (yearly) number of claims per contract, and the associated variance,

> (mean=weighted.mean(Y/E,E))
[1] 0.07279295
> (variance=sum((Y-mean*E)^2)/sum(E)) 
[1] 0.08778567

It looks like the variance is (slightly) larger than the average (we’ll see in a few weeks how to test it, more formally). It is possible to add covariates, for instance the density of population, in the area where the policyholder lives,

>  X=as.factor(baseFREQ$densite)
>  for(i in 1:length(levels(X))){
+ 	   Ei=E[X==levels(X)[i]]
+ 	   Yi=Y[X==levels(X)[i]]
+  (meani=weighted.mean(Yi/Ei,Ei))    # moyenne 
+  (variancei=sum((Yi-meani*Ei)^2)/sum(Ei))    # variance
+ cat("Density, zone",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Density, zone 11 average = 0.07962411  variance = 0.08711477 
Density, zone 21 average = 0.05294927  variance = 0.07378567 
Density, zone 22 average = 0.09330982  variance = 0.09582698 
Density, zone 23 average = 0.06918033  variance = 0.07641805 
Density, zone 24 average = 0.06004009  variance = 0.06293811 
Density, zone 25 average = 0.06577788  variance = 0.06726093 
Density, zone 26 average = 0.0688496   variance = 0.07126078 
Density, zone 31 average = 0.07725273  variance = 0.09067 
Density, zone 41 average = 0.03649222  variance = 0.03914317 
Density, zone 42 average = 0.08333333  variance = 0.1004027 
Density, zone 43 average = 0.07304602  variance = 0.07209618 
Density, zone 52 average = 0.06893741  variance = 0.07178091 
Density, zone 53 average = 0.07725661  variance = 0.07811935 
Density, zone 54 average = 0.07816105  variance = 0.08947993 
Density, zone 72 average = 0.08579731  variance = 0.09693305 
Density, zone 73 average = 0.04943033  variance = 0.04835521 
Density, zone 74 average = 0.1188611   variance = 0.1221675 
Density, zone 82 average = 0.09345635  variance = 0.09917425 
Density, zone 83 average = 0.04299708  variance = 0.05259835 
Density, zone 91 average = 0.07468126  variance = 0.3045718 
Density, zone 93 average = 0.08197912  variance = 0.09350102 
Density, zone 94 average = 0.03140971  variance = 0.04672329

Perhaps graphs would be a nice tool to play with, to visualize that information

> plot(meani,variancei,cex=sqrt(Ei),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(meani,variancei,cex=sqrt(Ei))
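Note that, in the loop above, meani, variancei and Ei are overwritten at each iteration, so only the values of the last zone remain. To reproduce the graph below, one probably needs to store those quantities first, e.g. (a sketch, with names and a rescaling of the circles of my own),

> nl=length(levels(X))
> meanv=variancev=expov=rep(NA,nl)
> for(i in 1:nl){
+ Ei=E[X==levels(X)[i]]
+ Yi=Y[X==levels(X)[i]]
+ meanv[i]=weighted.mean(Yi/Ei,Ei)
+ variancev[i]=sum((Yi-meanv[i]*Ei)^2)/sum(Ei)
+ expov[i]=sum(Ei)
+ }
> plot(meanv,variancev,cex=sqrt(expov/10),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(meanv,variancev,cex=sqrt(expov/10))
> abline(a=0,b=1,lty=2)   # first diagonal: variance equal to the mean (Poisson)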

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.26.png

The size of the circles is related to the size of the group (the area is proportional to the total exposure within the group). The first diagonal corresponds to the Poisson model, i.e. the variance should be equal to the mean. It is also possible to consider other covariates, like the gas type

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.52.02.png

or the car brand,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.50.49.png

It is also possible to consider the age of the driver as a categorical variate

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.40.png

Actually, the age is interesting: we can observe on that dataset a feature that Jean-Philippe Boucher also observed on his own datasets. Let us look more carefully at where the different ages are,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.55.17.png

On the right, we can observe young (inexperienced) drivers. That was expected. But some classes are below the first diagonal: the expected frequency is large, but not the variance. I.e. we know for sure that young drivers have more car accidents. It is not a heterogeneous class; on the contrary, young drivers can be seen as a relatively homogeneous class, with a high frequency of car accidents.

With the original dataset (here, I use only a subset with 50,000 clients), we do obtain the following graph:

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-11.27.04.png

Even if we do not observe underdispersion for young drivers, observe that those are incredibly homogeneous classes, with a clear impact of experience, since circles are moving downward from age 18 to 25.

Another disturbing story (this was – one more time – a suggestion from Jean-Philippe) is that it might be possible to consider the exposure as a standard covariate, and see whether its coefficient is actually equal to 1. Without any covariate,

>  reg=glm(Y~log(E),family=poisson("log"))
>  summary(reg)

Call:
glm(formula = Y ~ log(E), family = poisson("log"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.83045    0.02822 -100.31   <2e-16 ***
log(E)       0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

i.e. the parameter is clearly strictly smaller than 1. And it is neither related to significance,

> library(car)
> linearHypothesis(reg,"log(E)",1)
Linear hypothesis test

Hypothesis:
log(E) = 1

Model 1: restricted model
Model 2: Y ~ log(E)

  Res.Df Df  Chisq Pr(>Chisq)    
1  49999                         
2  49998  1 251.19  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

nor to the fact that I did not take into account covariates,

> reg=glm(nbre~log(exposition)+carburant+as.factor(ageconducteur)+as.factor(densite),family=poisson("log"),data=baseFREQ)
>  summary(reg)

Call:
glm(formula = nbre ~ log(exposition) + carburant + as.factor(ageconducteur) + 
    as.factor(densite), family = poisson("log"), data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.7114  -0.3200  -0.2637  -0.1896  12.7104  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                  -14.07321  181.04892  -0.078 0.938042    
log(exposition)                0.56781    0.03029  18.744  < 2e-16 ***
carburantE                    -0.17979    0.04630  -3.883 0.000103 ***
as.factor(ageconducteur)19    12.18354  181.04915   0.067 0.946348    
as.factor(ageconducteur)20    12.48752  181.04902   0.069 0.945011

(etc.). So it might be too strong an assumption to consider the exposure as an exogenous variable here. But that’s another story!

Ratemaking database

To complement this morning’s class, a quick word on the datasets, and more specifically on the contract dataset. About the variables,

  • densite is the population density of the town where the main driver lives,
  • zone: zone A, B, C, D, E or F, depending on the density (in number of inhabitants per km2) of the town of residence (A=”1-50″, B=”50-100″, C=”100-500″, D=”500-2,000″, E=”2,000-10,000″, F=”10,000+”).

For information, the population of France is distributed as follows,

  • marque: brand of the vehicle, according to the following table (1 Renault Nissan; 2 Peugeot Citroën; 3 Volkswagen Audi Skoda Seat; 4 Opel GM; 5 Ford; 6 Fiat; 10 Mercedes Chrysler; 11 BMW Mini; 12 other Japanese and Korean brands; 13 other European brands; 14 other and unknown brands). This variable is not a numeric variable,
  • region: a 2-digit code (which is not a numeric value) giving the 22 French regions (INSEE code), i.e. geographically

 

  • ageconducteur: age of the main driver at the beginning of the coverage period,
  • agevehicule: age of the vehicle at the beginning of the period.

Please do not use the bonus variable, since it involves information used for a posteriori ratemaking (which is not the topic of this course).

Introduction to generalized linear models

I am a bit ahead in the course, so I am going to put online the slides for next week (normally), where we will discuss the class of generalized linear models. The slides are online here.

I did not include a section on Generalized Additive Models; we will make do with the section on smoothing mentioned at the end of the slides on frequency modeling. To legitimize the smoothing methods (on the age of the insured in particular), I refer to a graph produced several years ago by a consulting firm, which noted that the shape of the smoothing function linking the age to the frequency is the same in every country,

http://freakonometrics.hypotheses.org/files/2013/02/assurance4.jpg But I think I will write a dedicated post on smoothing, in the context of property and casualty insurance ratemaking.

Regression tree using Gini’s index

In order to illustrate the construction of a regression tree (using the CART methodology), consider the following simulated dataset,

> set.seed(1)
> n=200
> X1=runif(n)
> X2=runif(n)
> P=.8*(X1<.3)*(X2<.5)+
+   .2*(X1<.3)*(X2>.5)+
+   .8*(X1>.3)*(X1<.85)*(X2<.3)+
+   .2*(X1>.3)*(X1<.85)*(X2>.3)+
+   .8*(X1>.85)*(X2<.7)+
+   .2*(X1>.85)*(X2>.7) 
> Y=rbinom(n,size=1,P)  
> B=data.frame(Y,X1,X2)

with one dichotomous variable (the variable of interest, Y), and two continuous ones (the explanatory ones, X1 and X2).

> tail(B)
    Y        X1        X2
195 0 0.2832325 0.1548510
196 0 0.5905732 0.3483021
197 0 0.1103606 0.6598210
198 0 0.8405070 0.3117724
199 0 0.3179637 0.3515734
200 1 0.7828513 0.1478457

The theoretical partition is the following

Here, the sample can be plotted below (be careful, the first variate is on the y-axis above, and on the x-axis below), with blue dots when Y equals one, and red dots when Y is null,

> plot(X1,X2,col="white")
> points(X1[Y=="1"],X2[Y=="1"],col="blue",pch=19)
> points(X1[Y=="0"],X2[Y=="0"],col="red",pch=19)

In order to construct the tree, we need a partition criterion. The most standard one is probably Gini’s index which, when the observations are split in two classes (denoted here https://latex.codecogs.com/gif.latex?\{A,B\}), can be written

https://latex.codecogs.com/gif.latex?-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)

or, when the observations are split in three classes (denoted https://latex.codecogs.com/gif.latex?\{A,B,C\}),

https://latex.codecogs.com/gif.latex?-\sum_{x\in\{A,B,C\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)

etc. Here, the https://latex.codecogs.com/gif.latex?n_{x,y}‘s are just counts of observations that belong to partition https://latex.codecogs.com/gif.latex?x and such that https://latex.codecogs.com/gif.latex?Y takes value https://latex.codecogs.com/gif.latex?y. But it is possible to consider other criteria, such as the chi-square distance,

https://latex.codecogs.com/gif.latex?\sum_{x}\sum_{y\in\{0,1\}}\frac{[n_{x,y}-n_{x,y}^{\perp}]^2}{n_{x,y}^{\perp}}

where, classically,

https://latex.codecogs.com/gif.latex?n_{x,y}^{\perp}=\frac{n_{x,\cdot}\cdot%20n_{\cdot,y}}{n}

is the count expected under independence, the sum over https://latex.codecogs.com/gif.latex?x being over the two classes (one knot) or the three classes (two knots).

Here again, the idea is to maximize that distance: the idea is to discriminate, so we want the two samples to be as far from independence as possible. To compute Gini’s index, consider

> GINI=function(y,i){
+ T=table(y,i)          # contingency table of the class y and the partition i
+ nx=apply(T,2,sum)     # number of observations in each element of the partition
+ pxy=T/matrix(rep(nx,each=2),2,ncol(T))  # proportion of each class, within each element
+ vxy=pxy*(1-pxy)       # p(1-p), per class and element
+ zx=apply(vxy,2,sum)   # impurity of each element of the partition
+ n=sum(T)
+ -sum(nx/n*zx)         # (minus) the weighted average impurity
+ }
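Similarly, the chi-square distance mentioned above could be coded as follows (a sketch, not used in the rest of this post),

> CHI2=function(y,i){
+ T=table(y,i)                                     # observed counts
+ Tind=outer(apply(T,1,sum),apply(T,2,sum))/sum(T) # counts expected under independence
+ sum((T-Tind)^2/Tind)                             # chi-square distance, to be maximized
+ }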

We simply construct the contingency table, and then compute the quantity given above. Assume, first, that there is only one explanatory variable. We split the sample in two, considering all possible splitting values, i.e. (in the code below) all the midpoints between two consecutive observations. Then, we compute Gini’s index for all those values. The knot is the value that maximizes Gini’s index. Once we have our first knot, we keep it (call it, from now on, the first knot). And we reiterate, by seeking the best second choice: given one knot, consider the value that splits the sample in three, and gives the highest Gini’s index. Thus, we consider either the following partition

or this one

I.e. we cut either below, or above the previous knot. And we iterate. The code can be something like that,

> X=X2
> u=(sort(X)[2:n]+sort(X)[1:(n-1)])/2
> knot=NULL
> for(s in 1:4){
+ vgini=rep(NA,length(u))
+ for(i in 1:length(u)){
+ kn=c(knot,u[i])
+ F=function(x){sum(x<=kn)}
+ I=Vectorize(F)(X)
+ vgini[i]=GINI(Y,I)
+ }
+ plot(u,vgini)
+ k=which.max(vgini)
+ cat("knot",k,u[k],"\n")
+ knot=c(knot,u[k])
+ u=u[-k]
+ }
knot 69 0.3025479 
knot 133 0.5846202 
knot 72 0.3148172 
knot 111 0.4811517

At the first step, the value of Gini’s index was the following,

which was maximal around 0.3. Then, this value is considered as fixed. And we try to construct a partition in three parts (splitting either below or above 0.3). We get the following plot for Gini’s index (as a function of this second knot)

which is maximal when we split the sample around 0.6 (which becomes our second knot). Etc. Now, let us compare our code with the standard R function,

> library(tree)
> tree(Y~X2,method="gini")
node), split, n, deviance, yval
      * denotes terminal node

 1) root 200 49.8800 0.4750  
   2) X2 < 0.302548 69 12.8100 0.7536 *
   3) X2 > 0.302548 131 28.8900 0.3282  
     6) X2 < 0.58462 65 16.1500 0.4615  
      12) X2 < 0.324591 7  0.8571 0.1429 *
      13) X2 > 0.324591 58 14.5000 0.5000 *
     7) X2 > 0.58462 66 10.4400 0.1970 *

We do obtain similar knots: the first one is 0.302 and the second one 0.584. So, constructing a tree is not that difficult…

Now, what if we consider our two explanatory variables? The story remains the same, except that the partition is now a bit more complex to write. To find the first knot, we consider all values on the two components, and again, keep the one that maximizes Gini’s index,

> n=nrow(B)
> u1=(sort(X1)[2:n]+sort(X1)[1:(n-1)])/2
> u2=(sort(X2)[2:n]+sort(X2)[1:(n-1)])/2
> gini=matrix(NA,nrow(B)-1,2)
> for(i in 1:length(u1)){
+ I=(X1<u1[i])
+ gini[i,1]=GINI(Y,I)
+ I=(X2<u2[i])
+ gini[i,2]=GINI(Y,I)
+ }
> mg=max(gini)
> i=1+sum(mg==max(gini[,2]))
> par(mfrow = c(1, 2))
> plot(u1,gini[,1],ylim=range(gini),col="green",type="b",xlab="X1",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==1){points(u1[which.max(gini[,1])],mg,pch=19,col="red")
+          segments(u1[which.max(gini[,1])],mg,u1[which.max(gini[,1])],-100000)}
> plot(u2,gini[,2],ylim=range(gini),col="green",type="b",xlab="X2",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==2){points(u2[which.max(gini[,2])],mg,pch=19,col="red")
+          segments(u2[which.max(gini[,2])],mg,u2[which.max(gini[,2])],-100000)}
> u2[which.max(gini[,2])]
[1] 0.3025479

The graphs are the following: either we split on the first component (and we obtain the partition on the right, below),

or we split on the second one (and we get the following partition),

Here, it is optimal to split on the second variate, first. And actually, we get back to the one-dimensional case discussed previously: as expected, it is optimal to split around 0.3. This is confirmed with the code below,

> library(tree)
> arbre=tree(Y~X1+X2,data=B,method="gini")
> arbre$frame[1:4,]
     var   n       dev      yval splits.cutleft splits.cutright
1     X2 200 49.875000 0.4750000      <0.302548       >0.302548
2     X1  69 12.811594 0.7536232      <0.800113       >0.800113
4 <leaf>  57  8.877193 0.8070175                               
5 <leaf>  12  3.000000 0.5000000

For the second knot, four cases should be considered: splitting on the second variable (again), either above or below the previous knot (see below on the left), or splitting on the first one. Then we have either a partition below or above the previous knot (see below on the right),

Etc. To visualize the tree, the code is the following

> plot(arbre)
> text(arbre)
> partition.tree(arbre)

http://freakonometrics.hypotheses.org/files/2013/01/arbre-gini-x1-x2-encore.png

Note that we can also visualize the partition. Nice, isn’t it?

To go further, the book Classification and Regression Trees by Leo Breiman (and co-authors) is awesome. Note that there are also interesting sections in the bible Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman (which can be downloaded from http://www.stanford.edu/~hastie/…)

Poisson regression, and minimum bias

In the next actuarial science class, we will finish regression trees, and introduce Poisson regression. The slides are online here,

I will present Poisson regression by drawing a parallel with logistic regression; the following session will deal with the generalization obtained with generalized linear models. On Poisson regression, I suggest reading Frees (2010) chapter 12 (pp. 343-361), Greene (2012), section 18.3 (pp. 802-828), or de Jong & Heller (2008) chapter 6. On minimum bias methods, de Jong & Heller (2008), section 1.3, and the article by Sholom Feldblum, http://www.casact.org/…. On the transition from those methods (introduced by Robert Bailey in the 1960s, http://www.casact.org/… and http://www.casact.org/…) to generalized linear models, I recommend reading the article by Ben Zehnwirth, Ratemaking From Bailey and Simon (1960) to Generalized Linear Regression Models, online at http://www.casact.org/…

As announced in the first class, I try to put the slides online as the course goes along, but I had gotten into the habit of writing on the board these last years, so I now have to type everything. Regarding the assignment, an email will be sent by the end of the week to all the groups that registered.

 

Logistic regression and trees

For next Wednesday’s class, the dataset used will be one taken from Jed Frees’ book, http://instruction.bus.wisc.edu/jfrees/…

> baseavocat=read.table("http://freakonometrics.free.fr/AutoBI.csv",header=TRUE,sep=",")
> tail(baseavocat)
     CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE  LOSS
1335   34204        2      2       2        2        1     26 0.161
1336   34210        2      1       2        2        1     NA 0.576
1337   34220        1      2       1        2        1     46 3.705
1338   34223        2      2       1        2        1     39 0.099
1339   34245        1      2       2        1        1     18 3.277
1340   34253        2      2       2        2        1     30 0.688

We have a binary variable indicating whether a policyholder, following a car accident, was represented by a lawyer (1 if yes, 2 if no). We know the policyholder's sex (1 for men, 2 for women) and marital status (1 if married, 2 if single, 3 for a widower, and 4 for a divorced policyholder). We also know whether or not the policyholder was wearing a seatbelt when the accident occurred (1 if yes, 2 if no, 3 if the information is unknown). Finally, there is a variable indicating whether the driver of the vehicle was insured (1 if yes, 2 if no, 3 if the information is unknown). We will recode the data a bit to make it easier to read.
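For instance, one possible recoding is the following (the new variable names and labels are only illustrative, not necessarily the ones used in class),

baseavocat$lawyer   = factor(baseavocat$ATTORNEY, levels=1:2, labels=c("yes","no"))
baseavocat$sex      = factor(baseavocat$CLMSEX,   levels=1:2, labels=c("male","female"))
baseavocat$marital  = factor(baseavocat$MARITAL,  levels=1:4,
                             labels=c("married","single","widowed","divorced"))
baseavocat$seatbelt = factor(baseavocat$SEATBELT, levels=1:3, labels=c("yes","no","unknown"))
baseavocat$insured  = factor(baseavocat$CLMINSUR, levels=1:3, labels=c("yes","no","unknown"))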

The course slides are online here,

On regression trees, I will post an entry to illustrate the method. In the meantime, additional theoretical material can be found online at http://genome.jouy.inra.fr/…, http://ensmp.fr/…, or http://ujf-grenoble.fr/… (for the record, we will only cover the CART method). I can also point to the book (and blog) of Stéphane Tuffery, or (in English) the book by Richard Berk, a summary of which is online at http://crim.upenn.edu/….

The following week, we will cover Poisson regression and minimum bias methods, and introduce generalized linear models. I refer to the chapter on a priori ratemaking in Denuit & Charpentier (2005), chapters 12 and 13 of Frees (2010), or chapters 5 and 6 of de Jong & Heller (2008). For the more curious who want to understand the links between generalized linear models and credibility ratemaking, I refer to the article by Klinker (2010).

Segmentation in ratemaking, further material

In the first P&C actuarial science class, we saw the importance of segmentation and its implication for the computation of premiums (moving from a mathematical expectation to a conditional expectation). To go a bit further, here is some additional material,

for a more economic reading of the issue of segmentation in insurance

or for a more legal reading

Otherwise, several popular-science articles can be read online,

The first lab session will take place on Monday, in the computer room. Karim will give an introduction to the R language and to handling variables (qualitative and quantitative). I will put the slides online at the end of the week, and the code will be posted during the course of next week.

As announced yesterday, there will be no class next Wednesday. The Wednesday after that, we will look at modeling indicator variables, i.e. logistic regression, and regression trees. The linear model will be assumed known; I am linking to the slides of the ACT6420 course from last term, lecture notes transparents1 and transparents2. It is also possible to reread Frees (2010), chapters 3, 4, 5 and 6.

To start practicing logistic regression, we will use the following small function

# small wrapper around glm() for a binomial model, with the link function as an argument
logit = function(formula, lien="logit", data=NULL) {
  glm(formula, family=binomial(link=lien), data=data)
}
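For instance (the choice of covariates here is purely illustrative), to model the probability that the claimant was represented by a lawyer,

reg = logit((ATTORNEY == 1) ~ factor(CLMSEX) + CLMAGE, data = baseavocat)
summary(reg)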

Otherwise, the Casualty Actuarial Society has put several documents online on regression trees (which are barely covered in the books mentioned above),

for a comparison of all the methods

The slides will be put online at the end of next week. To be continued…

A geographic region is not a continuous variable

While rereading the homework assignments, I realized that some had tried to group the (geographic) regions into homogeneous classes. Except that the regions were coded by a number (following the official classification). For example, in one of the datasets, we had policyholders in 4 geographic zones, namely region 82 (Rhône-Alpes, in red), region 54 (Poitou-Charentes, in green), region 73 (Midi-Pyrénées, in blue), and finally region 41 (Lorraine, in purple).

> unique(baseFREQ$region) 
[1] 82 54 73 41

An interesting idea for grouping the regions could be to use trees. Since the regions are colors (as the map shows clearly) and not quantitative variables, it is natural to work with factors. Incidentally, the code used to draw the map is the following,

> library(maps) 
>  france<-map(database="france") 
>  dpt=c("Ain","Ardeche","Drome","Isere","Loire ","Rhone",  
+ "Savoie","Haute-Savoie","Charente","Charente-Maritime", 
+ "Deux-Sevres","Vienne","Ariege","Aveyron","Haute-Garonne",  
+ "Gers","Lot","Hautes-Pyrenees","Tarn","Tarn-et-Garonne", 
+ "Meurthe-et-Moselle","Meuse","Moselle","Vosges") 
>  couleur=c(rep(2,8),rep(3,4),rep(4,8),rep(6,4))  
>  match=match.map(france,dpt) 
>  color=couleur[match] 
>  map(database="france", fill=TRUE, col=color)

The tree on the regions, treated as factors, gives the following partition

>  baseFREQ$fregion=as.factor(baseFREQ$region) 
>  ARBRE1=tree(nombre~fregion,data=baseFREQ,split="gini")  
>  plot(ARBRE1) 
>  text(ARBRE1)

Well, R has the unfortunate habit of recoding the classes (but it keeps the order, i.e. a corresponds to region 41, b to 54, c to 73 and d to 82). Visually, the takeaway is that we can consider two large groups, AC (i.e. 41 and 73) and BD (i.e. 54 and 82). The advantage of trees on qualitative variables, on factors, is that all groupings are possible. On the other hand, if we build a tree on the region read as a number (quantitative), we get

>  ARBRE2=tree(nombre~region,data=baseFREQ,split="gini") 
>  plot(ARBRE2) 
>  text(ARBRE2)

It is then impossible to group into the same class two regions whose codes are separated by another region's code, i.e. we cannot put 41 and 82 in the same class. R suggests distinguishing perhaps three groups, namely 82 (on the right), then 73 (in the middle), and finally possibly putting 41 and 54 together. This is not the optimal strategy obtained when grouping factors.
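Once the grouping suggested by the tree on factors is chosen (41 with 73, and 54 with 82), it can be hardcoded into a new factor before fitting any model; a small sketch (the name gregion is of course arbitrary),

baseFREQ$gregion = factor(ifelse(baseFREQ$region %in% c(41, 73), "41-73", "54-82"))
table(baseFREQ$fregion, baseFREQ$gregion)   # sanity check of the mapping of the four regions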

Variable selection versus selection of factor levels

In class, we (very briefly) mentioned automatic variable selection. The simplest approach is a stepwise method, based on an AIC- or BIC-type criterion. Consider the following dataset,

>  N = base$nbre
>  E = base$exposition
>  X1 = base$carburant
>  X2 = cut(base$agevehicule,c(0,3,10,101),
+ right=FALSE)
>  X3 = cut(base$ageconducteur,c(0,22,45,101),
+ right=FALSE)
>  X4 = as.factor(base$zone)
>  X5 = as.factor(base$puissance)
>  X6 = as.factor(base$region)
>  X7 = as.factor(base$marque)
>  base1=data.frame(N,E,X1,X2,X3,X4,X5,X6,X7)

Here, a (backward) stepwise selection gives

> reg1=glm(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),
+ family="poisson",data=base1)
> step(reg1)
Start:  AIC=20492.67
N ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X5   11    15316 20482
- X3    2    15305 20490
<none>       15304 20493
- X2    2    15314 20499
- X1    1    15319 20506
- X7   10    15343 20511
- X4    5    15398 20576
- X6   14    15569 20729

Step:  AIC=20482.35
N ~ X1 + X2 + X3 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X3    2    15317 20479
<none>       15316 20482
- X2    2    15326 20488
- X1    1    15334 20498
- X7   10    15359 20505
- X4    5    15410 20566
- X6   14    15579 20717

Step:  AIC=20479.33
N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
<none>       15317 20479
- X2    2    15327 20485
- X1    1    15334 20495
- X7   10    15360 20502
- X4    5    15410 20563
- X6   14    15620 20754

Call:  glm(formula = N ~ X1 + X2 + X4 + X6 + X7 
       + offset(log(E)),
       family = "poisson",
data = base1)

Coefficients:
(Intercept)          X1E     X2[3,10)   X2[10,101)          X4B
-1.0588454   -0.1653822    0.0266763   -0.1135451   -0.0004047
X4C          X4D          X4E          X4F          X60
0.1497622    0.3748811    0.5052894    0.4292016   -0.3590838
X61          X62          X63          X64          X65
-0.9300641   -1.0278887   -1.1818218   -1.0971797   -0.9459414
X66          X67          X68          X69         X610
-1.3690795   -1.1425678   -1.5309402   -1.3883549   -1.4603624
X611         X612         X613          X72          X73
-1.6763206   -1.3974092   -1.4864404    0.0246113    0.1144990
X74          X75          X76         X710         X711
-0.0932555    0.1635397   -0.1478095    0.2502030    0.1967970
X712         X713         X714
-0.2420215    0.2161411   -0.1963162

Degrees of Freedom: 49999 Total (i.e. Null);  49967 Residual
Null Deviance:	    15810
Residual Deviance: 15320 	AIC: 20480
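Note that step() penalizes by k = 2 per parameter, i.e. an AIC-type criterion; for a BIC-type criterion, it suffices to change the penalty to log(n),

step(reg1, k = log(nrow(base1)))   # BIC-type penalty instead of the default k = 2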

In other words, the stepwise procedure drops the third variable (the age of the main driver, in arbitrary classes) and the fifth one (the power of the vehicle), while keeping all the others. But here, if a variable was not retained, it is because, globally, it did not bring much information. It would nevertheless be possible to keep partial information, by keeping only some of its levels. The idea is to expand the dataset into dummy variables, creating one indicator per level. The dataset will be much bigger, and the selection will then take much longer,

> base2=data.frame(model.matrix( ~ 0+X1+X2+X3+X4+X5+X6+X7,
+ data=base1))
> base2$E=base1$E
> base2$N=base1$N
> reg2=glm(N~.-E+offset(log(E)),family="poisson",
+ data=base2)
>  step(reg2)
Start:  AIC=20492.67
N ~ (X1D + X1E + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101.
X4B + X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 +
X510 + X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 +
X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + E) - E + offset(log(E))

Step:  AIC=20492.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4B
X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 +
X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 +
X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + offset(log(E))

Df Deviance   AIC
- X4B         1    15304 20491
- X58         1    15304 20491
- X511        1    15304 20491
- X2.3.10.    1    15304 20491
- X72         1    15304 20491
- X513        1    15304 20491
- X512        1    15304 20491
- X515        1    15304 20491
- X74         1    15305 20491
- X3.45.101.  1    15305 20491
- X714        1    15305 20491
- X55         1    15305 20492
- X3.22.45.   1    15305 20492
- X711        1    15306 20492
- X76         1    15306 20492
- X59         1    15306 20492
<none>             15304 20493
- X514        1    15306 20493
- X713        1    15306 20493
- X73         1    15307 20493
- X56         1    15307 20493
- X710        1    15307 20494
- X75         1    15308 20494
- X2.10.101.  1    15308 20495
- X57         1    15309 20495
- X4C         1    15310 20496
- X510        1    15310 20496
- X60         1    15312 20498
- X4F         1    15314 20500
- X712        1    15316 20503
- X1D         1    15319 20506
- X4D         1    15337 20524
- X61         1    15345 20532
- X65         1    15350 20536
- X62         1    15352 20538
- X64         1    15359 20545
- X4E         1    15362 20549
- X63         1    15366 20553
- X67         1    15370 20556
- X612        1    15381 20568
- X69         1    15382 20569
- X66         1    15387 20574
- X610        1    15389 20576
- X68         1    15393 20580
- X611        1    15406 20592
- X613        1    15451 20637

Step:  AIC=20490.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4C
X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 + X511 +
X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 + X64 +
X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 + X613 +
X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 + X713 +
X714 + offset(log(E))

and so on… and if we go straight to the end,

Step:  AIC=20469.18
N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F + X57 + X510 + X60
X61 + X62 + X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 +
X611 + X612 + X613 + X73 + X75 + X76 + X710 + X712 + X713 +
offset(log(E))

Df Deviance   AIC
<none>             15315 20469
- X76         1    15317 20470
- X713        1    15317 20470
- X73         1    15317 20470
- X57         1    15318 20470
- X75         1    15318 20471
- X710        1    15319 20471
- X510        1    15319 20471
- X4C         1    15322 20474
- X60         1    15322 20475
- X2.10.101.  1    15325 20478
- X4F         1    15325 20478
- X1D         1    15333 20485
- X712        1    15338 20490
- X61         1    15356 20508
- X4D         1    15359 20511
- X62         1    15363 20515
- X65         1    15363 20515
- X64         1    15371 20524
- X63         1    15378 20530
- X67         1    15383 20536
- X4E         1    15390 20543
- X612        1    15394 20547
- X69         1    15396 20548
- X66         1    15400 20553
- X610        1    15403 20555
- X68         1    15407 20559
- X611        1    15419 20572
- X613        1    15467 20619

Call:  glm(formula = N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F
X57 + X510 + X60 + X61 + X62 + X63 + X64 + X65 + X66 + X67 +
X68 + X69 + X610 + X611 + X612 + X613 + X73 + X75 + X76 +
X710 + X712 + X713 + offset(log(E)), family = "poisson",
data = base2)

Coefficients:
(Intercept)          X1D   X2.10.101.          X4C          X4D
-1.20880      0.16886     -0.13808      0.14888      0.37539
X4E          X4F          X57         X510          X60
0.50458      0.42768      0.08381      0.18722     -0.36509
X61          X62          X63          X64          X65
-0.93836     -1.03471     -1.18803     -1.10217     -0.95624
X66          X67          X68          X69         X610
-1.37463     -1.15391     -1.54213     -1.40188     -1.47217
X611         X612         X613          X73          X75
-1.68559     -1.40582     -1.49700      0.10874      0.15022
X76         X710         X712         X713
-0.15183      0.21948     -0.27400      0.19565

Degrees of Freedom: 49999 Total (i.e. Null);  49971 Residual
Null Deviance:	    15810
Residual Deviance: 15310 	AIC: 20470

While the third variable (the age of the main driver, in arbitrary classes) disappears rather quickly, some information about the fifth one (the power) is kept, since some of its levels appear to be informative about the claim frequency. Note, however, that if we build a tree, the third variable still shows up as clearly significant, which may comfort us in the idea of performing the selection on the levels rather than on whole variables (a sketch of how to use the selected levels is given after the tree below).

> library(tree)
> TREE= tree(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),split="gini",
+ mincut = 2500,data=base1)
> plot(TREE)
> text(TREE,cex=.9)
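Coming back to the selection of levels: to keep, in the factor-based model, only the power levels retained by the selection on the dummies (levels 7 and 10, following the output above), one possibility is to replace the power factor by two indicators and refit the Poisson regression; a small sketch, with arbitrary names X5_7 and X5_10,

base1$X5_7  = as.numeric(base1$X5 == "7")    # indicator of power level 7
base1$X5_10 = as.numeric(base1$X5 == "10")   # indicator of power level 10
reg3 = glm(N ~ X1 + X2 + X4 + X5_7 + X5_10 + X6 + X7 + offset(log(E)),
           family = "poisson", data = base1)
summary(reg3)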