Category Archives: GLM

Actuariat de l’Assurance non-Vie #4

Next Monday, the non-life insurance actuarial science course continues. We have finished the part on classification (logistic model, trees, random forests, bagging, etc.), and we will move on to the section on claims frequency modeling and Poisson regression. I will add a few slides introducing GLMs, which will be useful when we talk about overdispersion.

 

Pricing Game, the results

Thursday, I will be in Paris, to discuss the results we got from the pricing game. I will present the 12 or 13 models that were sent to me, and discuss what happened when I created a market in which those models were competing. One or two models were clearly underestimating the losses, so with the results as they were sent, each time, one company got an 80% market share and a loss ratio above 250%. So I decided to normalize all the premiums, so that the average premium was the same for all the companies. Slides are now available.
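To fix ideas, that normalization is a simple rescaling. A minimal sketch, assuming a (hypothetical) matrix P of premiums with one column per competing model, could be

target = mean(colMeans(P))                        # common average premium (P is hypothetical)
P.norm = sweep(P, 2, colMeans(P), "/") * target   # every column now has the same average premium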

Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly come back to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\left([X-c]^2\right)\right\}=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\Vert X-c\Vert_{L_2}^2\right\}

it is natural to use least squares ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the weight of individuals, as well as their declared weight.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset, we have to swap two figures

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y=weight*2.204622

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X=Davis$height / 30.48

(in feet, since the heights are recorded in centimetres). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.

The goal will be to use (possible) explanatory variables to improve our prediction. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained,

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance will increase with the expected value.
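To make this concrete, here is a minimal sketch, using the Y and X defined above (and rounding the weights, so that the Poisson family is fed integer counts):

reglm = lm(Y~X)                                        # Gaussian model, identity link, constant variance
regpois = glm(round(Y)~X,family=poisson(link="log"))   # Poisson model, log link, variance equal to the mean
u = seq(min(X),max(X),length=101)
plot(X,Y,xlab="height (feet)",ylab="weight (lbs)")
lines(u,predict(reglm,newdata=data.frame(X=u)),col="blue")
lines(u,predict(regpois,newdata=data.frame(X=u),type="response"),col="red")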

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kind of models can be used for classification. ROC curves will be presented, and explained.
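As a teaser, here is a hedged sketch on the same Davis data, predicting the sex from the height and the weight (and assuming the ROCR package is available for the ROC curve):

library(ROCR)
reglogit = glm((sex=="F")~height+weight,data=Davis,family=binomial)
S = predict(reglogit,type="response")                  # fitted probabilities
pred = prediction(S,as.numeric(Davis$sex=="F"))
plot(performance(pred,"tpr","fpr"))                    # ROC curve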

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques
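For instance, a minimal sketch with rpart, still on the Davis data, could be

library(rpart)
tree = rpart(as.factor(sex)~height+weight,data=Davis,method="class")
plot(tree)
text(tree)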

We will also discuss random forests, bagging and boosting techniques
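and, under the same caveats (a sketch, assuming the randomForest package is installed),

library(randomForest)
rf = randomForest(as.factor(sex)~height+weight,data=Davis,ntree=500)
rf$confusion                                           # out-of-bag confusion matrix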

A pdf version of the slides can be downloaded.

Smoothing mortality rates

This morning, I was working on mortality tables with Julie, a student of mine visiting from Rennes. We are actually working on genealogical datasets from a small region in Québec, and we observe a lot of volatility. If I borrow one of her graphs, we get something like

Since we have some missing data, we wanted to use Generalized Nonlinear Models. So let us see how to get a smooth estimator of the mortality surface. We will write some code that we can later use on our data (the dataset we have was obtained after signing a lot of official documents, and I guess I cannot upload it here, even partially).

DEATH <- read.table(
"http://freakonometrics.free.fr/Deces-France.txt",
header=TRUE)
EXPO  <- read.table(
"http://freakonometrics.free.fr/Exposures-France.txt",
header=TRUE,skip=2)
library(gnm)
D=DEATH$Male
E=EXPO$Male
A=as.numeric(as.character(DEATH$Age))
Y=DEATH$Year
I=(A<100)
base=data.frame(D=D,E=E,Y=Y,A=A)
subbase=base[I,]
subbase=subbase[!is.na(subbase$A),]

The first idea can be to use a Poisson model, where the mortality rate is a smooth function of the age and the year, something like

D_{x,t}\sim\mathcal{P}(E_{x,t}\cdot e^{s(x,t)})

that can be estimated using

library(mgcv)
regbsp=gam(D~s(A,Y,bs="cr")+offset(log(E)),data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regbsp,newdata=data.frame(A=a,Y=y,E=1))
vX=trunc(seq(0,99,length=41))
vY=trunc(seq(1900,2005,length=41))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is here

It is also possible to extract the average value over the years, which is the interpretation of the \alpha_x coefficient in the Lee-Carter model,

predAx=function(a) mean(predict(regbsp,newdata=data.frame(A=a,
Y=seq(min(subbase$Y),max(subbase$Y)),E=1)))
plot(seq(0,99),Vectorize(predAx)(seq(0,99)),col="red",lwd=3,type="l")

We have the following smoothed mortality rate

Recall that the Lee-Carter model is

\log\mu_{x,t}=\alpha_x+\beta_x\cdot\kappa_t

where parameter estimates can be obtained using

regnp=gnm(D~factor(A)+Mult(factor(A),factor(Y))+offset(log(E)),
data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regnp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The (crude) mortality surface is

with the following \widehat{\alpha}_x coefficients.

plot(seq(1,99),coefficients(regnp)[2:100],col="red",lwd=3,type="l")

Here we have a lot of coefficients, and unfortunately, on a smaller dataset, we have much more variability. Can we smooth our Lee-Carter model, to get something which looks like \log\mu_{x,t}=A(x)+B(x)\cdot K(t), where A(\cdot), B(\cdot) and K(\cdot) are spline functions?

Actually, we can, and the code is rather simple

library(splines)
knotsA=c(20,40,60,80)
knotsY=c(1920,1945,1980,2000)
regsp=gnm(D~bs(subbase$A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3)+
Mult(bs(subbase$A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3),
 bs(subbase$Y,knots=knotsY,Boundary.knots=range(subbase$Y),degre=3))+
offset(log(E)),data=subbase, family=quasipoisson) 
BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degre=3) 
BpY=bs(seq(min(subbase$Y),max(subbase$Y)),knots=knotsY,Boundary.knots= range(subbase$Y),degre=3) 
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)", 
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is now

and again, it is possible to extract the average mortality rate, as a function of the age, over the years,

BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degre=3)
Ax=BpA%*%coefficients(regsp)[2:8]
plot(seq(0,99),Ax,col="red",lwd=3,type="l")

We can then play with the smoothing parameters of the spline functions, and see the impact on the mortality surface

knotsA=seq(5,95,by=5)
knotsY=seq(1910,2000,by=10)
regsp=gnm(D~bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3)+
Mult(bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3),
bs(Y,knots=knotsY,Boundary.knots=range(subbase$Y),degre=3))
+offset(log(E)),data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

We now have to use those functions on our small data sample! That should be fun….

GLM, non-linearity and heteroscedasticity

Last week, in the non-life insurance course, we saw the theory of Generalized Linear Models, emphasizing the two important components

  • the link function (which is actually the key component in predictive modeling)
  • the distribution, or the variance function

Just to illustrate, consider my favorite dataset

lin.mod = lm(dist~speed,data=cars)

A linear model means here Y_i=\beta_0+\beta_1X_i+\varepsilon_i

where the residuals are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like that

The idea here (in GLMs) is to assume Y\vert X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2)

which will produce the same model as the one described previously, based on some error term. That model can be visualized below,

attach(cars)
# data and an empty 3D frame: the (speed, dist) plane will be the floor
n=2
X=cars$speed
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype ="detailed", box = FALSE)
# Gaussian model with identity link (the standard linear regression), mean curve drawn on the floor
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
# 90% band, based on Gaussian quantiles with a constant standard deviation
sdgig=sqrt(summary(reggig)$dispersion)
x=seq(min(X),max(X),length=501)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
# observations, on the floor
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
# conditional Gaussian densities, drawn vertically at a few values of X
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
stp=251
x=rep(vX[j],stp)
y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"), sdgig)),max(Y)+15,length=stp)
z0=rep(0,stp)
z=dnorm(y, mgig[j], sdgig)
C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
polygon(C,border=NA,col="light blue",density=40)
C=trans3d(x,y,z0,mat)
lines(C,lty=2)
C=trans3d(x,y,z,mat)
lines(C,col="blue")}

We do have two parts here: the linear increase of the average, \mathbb{E}(Y\vert X=x)=\beta_0+\beta_1x and the constant variance of the normal distribution \text{Var}(Y\vert X=x)=\sigma^2.

On the other hand, if we assume a Poisson regression,

poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))

we have something like

This time, two things have changed simultaneously: our model is no longer linear, it is an exponential one \mathbb{E}(Y\vert X=x)=e^{\beta_0+\beta_1x}, and the variance is also increasing with the explanatory variable \text{Var}(Y\vert X=x)=e^{\beta_0+\beta_1x}, since with a Poisson regression,
Y\vert X=x\sim\mathcal{P}(e^{\beta_0+\beta_1x})

If we adapt the previous code, we get
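The adaptation itself is straightforward. A sketch of the main changes, reusing the cars data and the perspective frame set up above, would be

regpoi = glm(Y~X,data=df,family=poisson(link="log"))            # Poisson model on the same data
x = seq(min(X),max(X),length=501)
mu = predict(regpoi,newdata=data.frame(X=x),type="response")    # exponential mean curve
y1 = qpois(.95,mu)                                              # upper (95%) band
y2 = qpois(.05,mu)                                              # lower (5%) band
# and, in the loop, the vertical curves become dpois(y,mu[j]) evaluated on integer values of y,
# instead of dnorm(y,mgig[j],sdgig)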

The problem is that we changed two things when we introduced the Poisson regression from the linear model. So let us look at what happens when we change the two components independently. First, we can change the link function, with a Gaussian model but this time a multiplicative model (with a logarithm link function)

gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))

which is still, here, a homoscedastic model, but this time nonlinear. Or we can change the link function in the Poisson regression, to get a linear, but heteroscedastic, model

poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))
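To see the effect of each change on the fitted mean alone, we can overlay the four regression curves (a small sketch, reusing the four models above):

u = seq(min(cars$speed),max(cars$speed),length=101)
plot(cars$speed,cars$dist,xlab="speed",ylab="dist")
lines(u,predict(lin.mod,newdata=data.frame(speed=u)),col="blue")                          # Gaussian, identity link
lines(u,predict(gaussian.reg,newdata=data.frame(speed=u),type="response"),col="green")    # Gaussian, log link
lines(u,predict(poisson.reg,newdata=data.frame(speed=u),type="response"),col="red")       # Poisson, log link
lines(u,predict(poisson.lin,newdata=data.frame(speed=u),type="response"),col="orange")    # Poisson, identity link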

So this is basically what GLMs are about….

Modélisation des coûts individuels

This week, even though the UQAM network is down, the course will go on, and we will finish the section on modeling overdispersion for claim frequency. We should then start modeling individual claim amounts. In particular, we will spend some time on two points,

  • the distinction between the lognormal and the gamma distributions
  • the capping of large claims

The slides are online. And the claim-amount dataset is the one mentioned in the second class.

Surdispersion et comptage

This week, in the non-life insurance course, we will discuss overdispersion, which will close the part of the course on claim frequency modeling. The slides are online. But before talking about overdispersion, we will finish the presentation of GLMs. Here is a link to chapter 15 of John Fox's book Applied regression analysis and generalized linear models, as well as to James K. Lindsey's book Applying Generalized Linear Models. I would also point to Germán Rodríguez's lecture notes, with notes on Poisson regression (and a short complement on the notion of overdispersion).

The instructions for the second assignment will be sent by email.

Regression on variables, or on categories?

I admit it, the title sounds weird. The problem I want to address this evening is related to the use of the stepwise procedure on a regression model, and to the use of categorical variables (and possible misinterpretations). Consider the following dataset

> db = read.table("http://freakonometrics.free.fr/db2.txt",header=TRUE,sep=";")

First, let us change the reference level of our categorical variable (just to get an easier interpretation later on)

> db$X3=relevel(as.factor(db$X3),ref="E")

If we run a logistic regression on the three variables (two continuous, one categorical), we get

> reg=glm(Y~X1+X2+X3,family=binomial,data=db)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Now, if we use a stepwise procedure, to select variables in the model, we get

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3

       Df Deviance    AIC
- X3    4   587.81 593.81
<none>      582.29 596.29
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=593.81
Y ~ X1 + X2

       Df Deviance    AIC
<none>      587.81 593.81
- X2    1   606.90 610.90
- X1    1   622.44 626.44

So clearly, we should remove the categorical variable if our starting point was the regression on the three variables.

Now, what if we consider the same model, but written slightly differently, with indicator variables for the five categories,

> X3complete = model.matrix(~0+X3,data=db)
> db2 = data.frame(db,X3complete)
> head(db2)
  Y       X1       X2 X3 X3A X3B X3C X3D X3E
1 1 3.297569 16.25411  B   0   1   0   0   0
2 1 6.418031 18.45130  D   0   0   0   1   0
3 1 5.279068 16.61806  B   0   1   0   0   0
4 1 5.539834 19.72158  C   0   0   1   0   0
5 1 4.123464 18.38634  C   0   0   1   0   0
6 1 7.778443 19.58338  C   0   0   1   0   0

From a technical point of view, it is exactly the same as before, if we look at the regression,

> reg = glm(Y~X1+X2+X3A+X3B+X3C+X3D+X3E,family=binomial,data=db2)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E, family = binomial, 
    data = db2)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
X3E               NA         NA      NA       NA    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Both regressions are equivalent. Now, what about a stepwise selection on this new model?

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E

Step:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D

       Df Deviance    AIC
- X3D   1   582.30 594.30
- X3A   1   582.37 594.37
- X3B   1   582.40 594.40
<none>      582.29 596.29
- X3C   1   585.21 597.21
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=594.3
Y ~ X1 + X2 + X3A + X3B + X3C

       Df Deviance    AIC
- X3A   1   582.38 592.38
- X3B   1   582.41 592.41
<none>      582.30 594.30
- X3C   1   586.30 596.30
- X2    1   600.58 610.58
- X1    1   617.27 627.27

Step:  AIC=592.38
Y ~ X1 + X2 + X3B + X3C

       Df Deviance    AIC
- X3B   1   582.44 590.44
<none>      582.38 592.38
- X3C   1   587.20 595.20
- X2    1   600.59 608.59
- X1    1   617.64 625.64

Step:  AIC=590.44
Y ~ X1 + X2 + X3C

       Df Deviance    AIC
<none>      582.44 590.44
- X3C   1   587.81 593.81
- X2    1   600.73 606.73
- X1    1   617.66 623.66

What do we get now? This time, the stepwise procedure recommends keeping one category (namely C). So my point is simple: when running a stepwise procedure with factors, either we keep the factor as it is, or we drop it. If the design should have been changed, by pooling some categories together, and we forgot to do it, then the whole variable will be dropped, because keeping four categories that carry the same information costs too much in terms of the Akaike criterion. And this is exactly what happens here

> library(car)
> reg = glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)
> linearHypothesis(reg,c("X3A=X3B","X3A=X3D","X3A=0"))
Linear hypothesis test

Hypothesis:
X3A - X3B = 0
X3A - X3D = 0
X3A = 0

Model 1: restricted model
Model 2: Y ~ X1 + X2 + X3

  Res.Df Df  Chisq Pr(>Chisq)
1    996                     
2    993  3 0.1446      0.986

So here, we should pool together categories A, B, D and E (E being the reference here). As mentioned in a previous post, categories that should be pooled together have to be pooled as soon as possible. If not, the stepwise procedure might lead to misinterpretations.
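For instance, a hedged sketch of that pooling (X3pool is an illustrative name introduced here), before running the stepwise procedure again:

> db$X3pool = as.factor(ifelse(db$X3=="C","C","other"))
> reg.pool = glm(Y~X1+X2+X3pool,family=binomial,data=db)
> summary(reg.pool)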

Régression de Poisson

On Wednesday, we will finish classification trees, and start modeling claim frequency. The slides are online.

As announced in the first class, I suggest starting to read the Practitioner's Guide to Generalized Linear Models. This document corresponds to the minimum expected in this course.

Exposure as a possible explanatory variable

In insurance pricing, the exposure is usually used as an offset variable to model claims frequency. As explained many times on this blog (e.g. here), and in my notes, if we have two identical drivers, one with an exposure of six months and the other with an exposure of one year, it is natural to assume that, on average, the second driver will have twice as many accidents. This is the motivation for using a standard (homogeneous) Poisson process to model claim frequency. One can also see a legal issue here, since, in case of a (partial) reimbursement of a premium, it would be done prorata temporis. The risk is proportional to the exposure. Thus, if Y_i denotes the number of claims of insured i, with characteristics \boldsymbol{X}_{i}=(X_{i,1},\cdots,X_{i,k}) and exposure E_i, with a Poisson regression, we would write

Y_i\sim\mathcal{P}(E_i\cdot\exp(\boldsymbol{X}_i'\boldsymbol{\beta}))

or equivalently

Y_i\sim\mathcal{P}(\exp(\log(E_i)+\boldsymbol{X}_i'\boldsymbol{\beta}))

From this expression, the logarithm of the exposure is an explanatory variable, but with no coefficient to estimate (the coefficient is taken to be one). Can't we use the exposure as an explanatory variable? Would we get a unit parameter?

Of course, in the context of ratemaking, it is probably not a relevant question, since actuaries are required to predict an annual claim frequency (insurance contracts are supposed to provide one year of coverage). But it might be interesting to get a better understanding of why people might be leaving our portfolio (i.e. cancelling their insurance policy before term, or not renewing it someday).

To be more specific, and to get a better understanding, consider the following model: claims arrive according to a Poisson process, and the insured are dedicated to their insurance company (they never leave). Let us generate scenarios over twenty years

> n=983
> D1=as.Date("01/01/1993",'%d/%m/%Y')
> D2=as.Date("31/12/2013",'%d/%m/%Y')
> L=D1+0:(D2-D1)
> set.seed(1)
> arrival=sample(L,size=n,replace=TRUE)
> exposure=N=rep(NA,n)
> departure=rep(D2,n)
> set.seed(2)
> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   while(max(w)<expo) w=c(w,max(w)+1+trunc(rexp(1,1/1000)))
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-2)}
> df=data.frame(N=N,E=exposure/365)

Here the expected time between claims is considered to be 1000 days. The (annual) intensity of the Poisson process is here

> 365/1000
[1] 0.365

so if we run a Poisson regression on the logarithm of the exposure (feel free to add other covariates if you want; the example here is just to see what happens when the exposure is treated as a standard covariate), we should get a parameter close to

> log(365/1000)
[1] -1.007858

Here, the regression on a constant, with the offset variable is

> reg=glm(N~1+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ 1 + offset(log(E)), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.4145  -0.4673   0.2367   0.8770   3.6828  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.04233    0.02532  -41.17   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.9  on 982  degrees of freedom
Residual deviance: 1116.9  on 982  degrees of freedom
AIC: 3282.9

Number of Fisher Scoring iterations: 5

which is consistent with what we just said. If we run the regression with the logarithm of the exposure as a possible explanatory variable, we would expect to have a coefficient close to 1. And indeed…

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.03350    0.08546  -12.09   <2e-16 ***
log(E)       1.00920    0.03292   30.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2553.6  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5

If we keep the offset and add the variable, we can see that it becomes useless (which is, somehow, a test of a unit parameter)

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.033503   0.085460 -12.093   <2e-16 ***
log(E)       0.009201   0.032920   0.279     0.78    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1064.3  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5
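Equivalently, one can run a Wald test of a unit coefficient directly on the regression without the offset (a small sketch, with reg0 a name introduced here for illustration),

> reg0=glm(N~log(E),data=df,family=poisson)
> b=coef(reg0)["log(E)"]
> s=sqrt(vcov(reg0)["log(E)","log(E)"])
> (b-1)/s                      # Wald statistic for H0: unit coefficient
> 2*pnorm(-abs((b-1)/s))       # associated p-value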

Here, we do have pure Poisson processes, so exposure is crucial, since the parameter of the Poisson distribution is proportional to the exposure. But we cannot learn anything else from the exposure.

Consider some real data.

> head(baseFREQ)
  nocontrat exposition zone puissance agevehicule
1        27       0.87    C         7           0
2       115       0.72    D         5           0
3       121       0.05    C         6           0
4       142       0.90    C        10          10
5       155       0.12    C         7           0
6       186       0.83    C         5           0
  ageconducteur bonus marque carburant densite region nbre
1            56    50     12         D      93     13    0
2            45    50     12         E      54     13    0
3            37    55     12         D      11     13    0
4            42    50     12         D      93     13    0
5            59    50     12         E      73     13    0
6            75    50     12         E      42     13    0

What do we get if we consider a Poisson regression on the logarithm of the exposure ?

> reg=glm(nbre~log(exposition),data=baseFREQ,family=poisson)
> summary(reg)

Call:
glm(formula = nbre ~ log(exposition), family = poisson, data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
                Estimate Std. Error z value Pr(>|z|)    
(Intercept)     -2.83045    0.02822 -100.31   <2e-16 ***
log(exposition)  0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

If we add the exposure to the offset, what happens? (let us use a nonparametric transformation, to visualize what is going on)

> library(gam)
> reg=gam(nbre~offset(log(exposition))+s(exposition),data=baseFREQ,family=poisson)
> plot(reg,se=TRUE)

There is a clear and significant effect: the longer the insured stay, the less likely they are to get a claim. Actually, this can be observed without running a regression.

> i1=which(baseFREQ$nbre>0)
> i0=which(baseFREQ$nbre==0)
> h1=hist(baseFREQ$exposition[i1],probability=TRUE)
> h0=hist(baseFREQ$exposition[i0],probability=TRUE)
> plot(h1$mids,h1$density,type='s',lwd=2,col="red")
> lines(h0$mids,h0$density,type='s',col='blue',lwd=2)

In blue, we have the density of the exposure for those who did not have claims, and in red, the density of those who did have one claim (or more)

So here, we cannot assume a unit value for the parameter. What does that mean? Can we reproduce such a behavior?

In order to get a better understanding, consider two possible behaviors for the insured. The first one is the following: if the company does not offer substantial discounts after several years without claims, the insured might leave. For instance, if the insured has no claim during 5 years, then after 5 years, he will leave the company (to get a better price somewhere else, say). The code will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=c(0,0)
+   while((max(w)<expo) & (max(diff(w))<1500)) w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   if(max(diff(w))>1500) departure[i]=arrival[i]+max(w[-length(w)])+1500
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-3)}
> df=data.frame(N=N,E=exposure/365)

Here, I consider 1500 days instead of 5 years, but it is the same idea. So, what do we have here?

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       1.65738    0.04494   36.88   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2567.31  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

Here, the coefficient is (significantly) larger than 1. More precisely,

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       0.65738    0.04494   14.63   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1114.24  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

There is clearly a bias here: people staying longer are more likely to have had an accident, which is consistent with our story, since the clients with low risk left.

The second behavior is the following: sometimes, the insured are not satisfied with the way claims are handled, and they might leave after the first claim. Consider the case where, after a claim, it is likely (say with probability 50%) that the insured leaves the company. Instead of assuming that the insured did not like the claims management, one can also consider the case where the car is so damaged that he cannot drive it anymore, so it becomes useless to pay an insurance premium. The code here will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   stay=TRUE
+   while((max(w)<expo) & (stay==TRUE)) { w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   stay=sample(c(TRUE,FALSE),prob=c(.5,.5),size=1)}
+   N[i]=length(w)-2
+   if(stay==FALSE) {departure[i]=arrival[i]+max(w)
+   N[i]=length(w)-1}
+   exposure[i]=departure[i]-arrival[i]}
> df=data.frame(N=N,E=exposure/365)

Here, after each claim, the insured tosses a coin to decide whether to cancel the contract, or not.

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)       0.30640    0.02511  12.203   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 666.92  on 982  degrees of freedom
Residual deviance: 498.29  on 981  degrees of freedom
AIC: 2666.3

This time, the parameter is (again significantly) smaller than one.

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)      -0.69360    0.02511 -27.625   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.87  on 982  degrees of freedom
Residual deviance:  498.29  on 981  degrees of freedom
AIC: 2666.3

The story is now rather different: those who stay long cannot have had many opportunities to leave, so they did not have many claims. If someone has a long exposure, the negative sign in the output above means that he should not have many claims, on average.

As we can see, those models produce rather different outputs. Note that many more interpretations are possible, for instance depending on the way the data were extracted,

  • all policies observed, over those twenty years,
  • all policies in force at some specific date, until now
  • all policies in force at some specific date, until one year after
  • all policies in force now

So far, we have been using the first method, but the other ones would yield different interpretations, e.g. because of survivorship bias. But that's another story… And one can read Boucher and Denuit (2008) to go further.

Poisson regression on non-integers

In the course on claims reserving techniques, I mentioned the use of Poisson regression, even when incremental payments are not integers. For instance, we considered the following incremental triangle

>  source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
>  INC=PAID
>  INC[,2:6]=PAID[,2:6]-PAID[,1:5]
>  INC
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

On those payments, it is natural to use a Poisson regression, to predict future payments

>  Y=as.vector(INC)
>  D=rep(1:6,each=6)
>  A=rep(2001:2006,6)
>  base=data.frame(Y,D,A)
>  reg=glm(Y~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
>  Yp=predict(reg,type="response",newdata=base)
>  matrix(Yp,6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

and the total amount of reserves would be

>  sum(Yp[is.na(Y)==TRUE])
[1] 2426.985

Here, payments were in '000 euros. What if they were in millions of euros?

> a=1000
> INC/a
      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]
[1,] 3.209 1.163 0.039 0.017 0.007 0.021
[2,] 3.367 1.292 0.037 0.024 0.010    NA
[3,] 3.871 1.474 0.053 0.022    NA    NA
[4,] 4.239 1.678 0.103    NA    NA    NA
[5,] 4.929 1.865    NA    NA    NA    NA
[6,] 5.217    NA    NA    NA    NA    NA

We can still run a regression here

> reg=glm((Y/a)~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
> Yp=predict(reg,type="response",newdata=base)
> sum(Yp[is.na(Y)==TRUE])*a
[1] 2426.985

and the prediction is exactly the same. Actually, we can change the currency, or multiply by any constant: with a log link function, the Poisson regression will always return the same prediction,

>  homogeneity=function(a=1){
+  reg=glm((Y/a)~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
+  Yp=predict(reg,type="response",newdata=base)
+  return(sum(Yp[is.na(Y)==TRUE])*a)
+  }
>  Vectorize(homogeneity)(10^(seq(-3,5)))
[1] 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985

The trick comes from the fact that we like the Poisson interpretation, but a GLM really just means that we want to solve a first order condition, and that condition can be solved explicitly without any requirement that the observations be integers. To run a simple code, the intercept should be related to the last value of the matrix, not the first one.
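Written out (with \boldsymbol{X}_i collecting the intercept and the factor dummies), the first order condition of the Poisson likelihood with a log link is \sum_i\left[Y_i-\exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right]\boldsymbol{X}_i=\boldsymbol{0}, and nothing in this system requires the Y_i's to be integers.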

> base$D=relevel(as.factor(base$D),"6")
> base$A=relevel(as.factor(base$A),"2006")
> reg=glm(Y~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), family = poisson(link = "log"), 
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)    
(Intercept)       3.54723    0.21921  16.182  < 2e-16 ***
as.factor(D)1     5.01244    0.21877  22.912  < 2e-16 ***
as.factor(D)2     4.04731    0.21896  18.484  < 2e-16 ***
as.factor(D)3     0.86391    0.22827   3.785 0.000154 ***
as.factor(D)4    -0.09254    0.25229  -0.367 0.713754    
as.factor(D)5    -0.93717    0.32643  -2.871 0.004092 ** 
as.factor(A)2001 -0.50271    0.02079 -24.179  < 2e-16 ***
as.factor(A)2002 -0.43831    0.02045 -21.433  < 2e-16 ***
as.factor(A)2003 -0.30029    0.01978 -15.184  < 2e-16 ***
as.factor(A)2004 -0.19096    0.01930  -9.895  < 2e-16 ***
as.factor(A)2005 -0.05864    0.01879  -3.121 0.001799 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

The first idea is to solve that first order condition numerically, with a Newton-Raphson algorithm, as follows (the starting point will be the coefficients from a linear regression on the logarithm of the observations),

> YNA <- Y
> XNA=matrix(0,length(Y),1+5+5)
> XNA[,1]=rep(1,length(Y))
>   for(k in 1:5) XNA[(k-1)*6+1:6,k+1]=k
>   u=(1:(length(Y))%%6); u[u==0]=6
>   for(k in 1:5) XNA[u==k,k+6]=k 
>     YnoNA=YNA[is.na(YNA)==FALSE]
>     XnoNA=XNA[is.na(YNA)==FALSE,]
>     beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>     for(s in 1:50){
+     Ypred=exp(XnoNA%*%beta)
+     gradient=t(XnoNA)%*%(YnoNA-Ypred)
+     omega=matrix(0,nrow(XnoNA),nrow(XnoNA));diag(omega)=exp(XnoNA%*%beta) 
+     hessienne=-t(XnoNA)%*%omega%*%XnoNA
+     beta=beta-solve(hessienne)%*%gradient}
> beta
             [,1]
 [1,]  3.54723486
 [2,]  5.01244294
 [3,]  2.02365553
 [4,]  0.28796945
 [5,] -0.02313601
 [6,] -0.18743467
 [7,] -0.50271242
 [8,] -0.21915742
 [9,] -0.10009587
[10,] -0.04774056
[11,] -0.01172840

The coefficients are not directly comparable with the values given by R: in the design matrix above, the dummy variable for level k is coded k instead of 1, so each coefficient is the one from R divided by k. But it is just fine if we focus on the predictions

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

which are exactly the ones obtained above. And here, we clearly see that there is no assumption such as "the response variable should be an integer". It is also possible to remember that the first order condition is the same as the one of a weighted least squares problem. The catch is that the weights are functions of the predictions, but using an iterative algorithm (iteratively reweighted least squares), we should converge,

> beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>  for(i in 1:50){
+ Ypred=exp(XnoNA%*%beta)
+  z=XnoNA%*%beta+(YnoNA-Ypred)/Ypred
+  REG=lm(z~0+XnoNA,weights=Ypred)
+  beta=REG$coefficients
+ }
> 
> beta
     XnoNA1      XnoNA2      XnoNA3      XnoNA4      XnoNA5      XnoNA6
 3.54723486  5.01244294  2.02365553  0.28796945 -0.02313601 -0.18743467
     XnoNA7      XnoNA8      XnoNA9     XnoNA10     XnoNA11 
-0.50271242 -0.21915742 -0.10009587 -0.04774056 -0.01172840

which are the same values as the ones we got previously. Here again, the prediction is the same as the one we got from this so-called Poisson regression,

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 20.9
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

Again, it works just fine because GLMs are essentially conditions on the first two moments, and the numerical computations are based on the first order condition, which imposes fewer constraints than the full interpretation in terms of a Poisson model.

Overdispersed Poisson et bootstrap

For the last class on claims reserving methods, we stopped at simulation-based methods. Let us pick up where we left off in the previous post, where we saw that running a Poisson regression on the increments gives exactly the same amount as the Chain Ladder method,

> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

> y=as.vector(as.matrix(Y))
> base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
> reg2=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson) 
> summary(reg2)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = poisson, 
    data = base)

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)    
(Intercept)        8.05697    0.01551 519.426  < 2e-16 ***
as.factor(ai)2001  0.06440    0.02090   3.081  0.00206 ** 
as.factor(ai)2002  0.20242    0.02025   9.995  < 2e-16 ***
as.factor(ai)2003  0.31175    0.01980  15.744  < 2e-16 ***
as.factor(ai)2004  0.44407    0.01933  22.971  < 2e-16 ***
as.factor(ai)2005  0.50271    0.02079  24.179  < 2e-16 ***
as.factor(bj)1    -0.96513    0.01359 -70.994  < 2e-16 ***
as.factor(bj)2    -4.14853    0.06613 -62.729  < 2e-16 ***
as.factor(bj)3    -5.10499    0.12632 -40.413  < 2e-16 ***
as.factor(bj)4    -5.94962    0.24279 -24.505  < 2e-16 ***
as.factor(bj)5    -5.01244    0.21877 -22.912  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

Number of Fisher Scoring iterations: 4

> base$py2=predict(reg2,newdata=base,type="response")
> round(matrix(base$py2,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.7 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.1 53.1 20.4  8.8 22.4
[3,] 3863.7 1471.8 61.0 23.4 10.1 25.7
[4,] 4310.1 1641.9 68.0 26.1 11.2 28.7
[5,] 4919.9 1874.1 77.7 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.4 31.6 13.6 34.7

> sum(base$py2[is.na(base$y)])
[1] 2426.985

Perhaps the most interesting point is that the Poisson distribution probably has too little variance…

> reg2b=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson)
> summary(reg2b)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = quasipoisson, 
    data = base)

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)        8.05697    0.02769 290.995  < 2e-16 ***
as.factor(ai)2001  0.06440    0.03731   1.726 0.115054    
as.factor(ai)2002  0.20242    0.03615   5.599 0.000228 ***
as.factor(ai)2003  0.31175    0.03535   8.820 4.96e-06 ***
as.factor(ai)2004  0.44407    0.03451  12.869 1.51e-07 ***
as.factor(ai)2005  0.50271    0.03711  13.546 9.28e-08 ***
as.factor(bj)1    -0.96513    0.02427 -39.772 2.41e-12 ***
as.factor(bj)2    -4.14853    0.11805 -35.142 8.26e-12 ***
as.factor(bj)3    -5.10499    0.22548 -22.641 6.36e-10 ***
as.factor(bj)4    -5.94962    0.43338 -13.728 8.17e-08 ***
as.factor(bj)5    -5.01244    0.39050 -12.836 1.55e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for quasipoisson family taken to be 3.18623)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: NA

Number of Fisher Scoring iterations: 4

But we will come back to this in a moment. Next, we had started to look at the errors made on the upper part of the triangle. Classically, by construction, the Pearson residuals are of the form

\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\text{Var}(Y_i)}}

We saw in the ratemaking course that the variance in the denominator can be replaced by the prediction, since in a Poisson model the mean and the variance are equal. So we considered

\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}

> base$erreur=(base$y-base$py2)/sqrt(base$py2)
> round(matrix(base$erreur,n,n),1)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]  0.9 -1.1 -1.5 -0.5 -0.4    0
[2,]  0.0  0.3 -2.2  0.8  0.4   NA
[3,]  0.1  0.1 -1.0 -0.3   NA   NA
[4,] -1.1  0.9  4.2   NA   NA   NA
[5,]  0.1 -0.2   NA   NA   NA   NA
[6,]  0.0   NA   NA   NA   NA   NA

The problem is that even if \widehat{Y}_i is, asymptotically, a good estimator of \text{Var}(Y_i), this is not the case in finite samples: the estimator of the variance is then biased, and the residuals are unlikely to have unit variance. So we need to correct the variance estimator, and we set

\varepsilon_i=\sqrt{\frac{n}{n-k}}\cdot\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}

which are the Pearson residuals as they should be used.

> E=base$erreur[is.na(base$y)==FALSE]*sqrt(21/(21-11))
> E
 [1]  1.374976e+00  3.485024e-02  1.693203e-01 -1.569329e+00  1.887862e-01
 [6] -1.459787e-13 -1.634646e+00  4.018940e-01  8.216186e-02  1.292578e+00
[11] -3.058764e-01 -2.221573e+00 -3.207593e+00 -1.484151e+00  6.140566e+00
[16] -7.100321e-01  1.149049e+00 -4.307387e-01 -6.196386e-01  6.000048e-01
[21] -8.987734e-15
> boxplot(E,horizontal=TRUE)

By resampling from these residuals, we can then generate a pseudo-triangle. For simplicity, we will generate a full rectangle, and restrict ourselves to its upper part,

> Eb=sample(E,size=36,replace=TRUE)
> Yb=base$py2+Eb*sqrt(base$py2)
> Ybna=Yb
> Ybna[is.na(base$y)]=NA
> Tb=matrix(Ybna,n,n)
> round(matrix(Tb,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3115.8 1145.4 58.9 46.0  6.4 26.9
[2,] 3179.5 1323.2 54.5 21.3 12.2   NA
[3,] 4245.4 1448.1 61.0  7.9   NA   NA
[4,] 4312.4 1581.7 68.7   NA   NA   NA
[5,] 4948.1 1923.9   NA   NA   NA   NA
[6,] 4985.3     NA   NA   NA   NA   NA

This time, we have a new triangle! We can then do several things:

  1. complete the triangle with the Chain Ladder method, i.e. compute the expected amounts we think we will pay in future years
  2. generate payment scenarios for future years, simulating payments from Poisson distributions (centered on the expected amounts we just computed)
  3. generate payment scenarios from distributions with more variance than the Poisson. Ideally, we would like to simulate quasi-Poisson distributions, but these are not proper probability distributions; however, we can recall that, in this case, the Gamma distribution should provide a good approximation.

For this last point, we will use the following code to generate quasi-Poisson draws,

> rqpois = function(n, lambda, phi, roundvalue = TRUE) {
+ b = phi
+ a = lambda/phi
+ r = rgamma(n, shape = a, scale = b)
+ if(roundvalue){r=round(r)}
+ return(r)
+ }
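As a quick sanity check (an illustration only), one can verify that the simulated values have a mean close to lambda and a variance close to phi times lambda,

> x=rqpois(1e6,lambda=10,phi=3.18623)
> mean(x)
> var(x)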

I refer to the various lecture notes for more details on the justification, or to an older post. We then write a small function which, from a triangle, either sums the expected future payments, or sums simulated payment scenarios,

> CL=function(Tri){
+ y=as.vector(as.matrix(Tri))
+ base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
+ reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson)
+ py2=predict(reg,newdata=base,type="response")
+ pys=rpois(36,py2)
+ pysq=rqpois(36,py2,phi=3.18623)
+ return(list(
+ cl=sum(py2[is.na(base$y)]),
+ sc=sum(pys[is.na(base$y)]),
+ scq=sum(pysq[is.na(base$y)])))
+ }

It then remains to generate batches of triangles. Note that it is possible to generate triangles with negative increments; to keep things simple, negative payments will be set to zero. The impact on the quantiles should be negligible.

> VCL=VR=VRq=rep(NA,1000)
> for(s in 1:1000){
+ Eb=sample(E,size=36,replace=TRUE)*sqrt(21/(21-11))
+ Yb=base$py2+Eb*sqrt(base$py2)
+ Yb=pmax(Yb,0)
+ scY=rpois(36,Yb)
+ Ybna=Yb
+ Ybna[is.na(base$y)]=NA
+ Tb=matrix(Ybna,6,6)
+ C=CL(Tb)
+ VCL[s]=C$cl
+ VR[s]=C$sc
+ VRq[s]=C$scq
+ }

If we look at the distribution of the best estimate, we get

> hist(VCL,proba=TRUE,col="light blue",border="white",ylim=c(0,0.003))
> boxplot(VCL,horizontal=TRUE,at=.0023,boxwex = 0.0006,add=TRUE,col="light green")
> D=density(VCL)
> lines(D)
> I=which(D$x<=quantile(VCL,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(VCL,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)

But we can also visualize scenarios based on Poisson distributions (equidispersed), or on quasi-Poisson distributions (overdispersed), below,

In the latter case, we can derive the 99% quantile of future payments,

> quantile(VRq,.99)
    99% 
2855.01

We therefore have to increase the amount of reserves by roughly 18% (428 on top of 2,427) to ensure that the company can meet its liabilities in 99% of the cases,

> quantile(VRq,.99)-2426.985
    99% 
428.025