Category Archives: Hiver

Poisson regression on non-integers

In the course on claims reserving techniques, I mentioned that Poisson regression can be used even when incremental payments are not integers. For instance, we considered the following incremental triangle

>  source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
>  INC=PAID
>  INC[,2:6]=PAID[,2:6]-PAID[,1:5]
>  INC
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

On those payments, it is natural to use a Poisson regression, to predict future payments

>  Y=as.vector(INC)
>  D=rep(1:6,each=6)
>  A=rep(2001:2006,6)
>  base=data.frame(Y,D,A)
>  reg=glm(Y~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
>  Yp=predict(reg,type="response",newdata=base)
>  matrix(Yp,6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

and the total amount of reserves would be

>  sum(Yp[is.na(Y)==TRUE])
[1] 2426.985

Here, payments were in thousands of euros. What if they were in millions of euros?

> a=1000
> INC/a
      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]
[1,] 3.209 1.163 0.039 0.017 0.007 0.021
[2,] 3.367 1.292 0.037 0.024 0.010    NA
[3,] 3.871 1.474 0.053 0.022    NA    NA
[4,] 4.239 1.678 0.103    NA    NA    NA
[5,] 4.929 1.865    NA    NA    NA    NA
[6,] 5.217    NA    NA    NA    NA    NA

We can still run a regression here

> reg=glm((Y/a)~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
> Yp=predict(reg,type="response",newdata=base)
> sum(Yp[is.na(Y)==TRUE])*a
[1] 2426.985

and the prediction is exactly the same. Actually, it is possible to change the currency, or to multiply by any constant: the Poisson regression will always return the same prediction, as long as we use a log link function,

>  homogeneity=function(a=1){
+  reg=glm((Y/a)~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
+  Yp=predict(reg,type="response",newdata=base)
+  return(sum(Yp[is.na(Y)==TRUE])*a)
+  }
>  Vectorize(homogeneity)(10^(seq(-3,5)))
[1] 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985
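To see why the log link yields this invariance, here is a short derivation (a sketch written directly in LaTeX, with generic GLM notation: the design vector \boldsymbol{x}_i and coefficients \boldsymbol{\beta} are not objects from the code). The first-order condition of the Poisson (quasi-)likelihood, which never requires the response to be an integer, is

\sum_i \left(Y_i-\exp(\boldsymbol{x}_i'\boldsymbol{\beta})\right)\boldsymbol{x}_i=\boldsymbol{0}

If \widehat{\boldsymbol{\beta}} solves it for the Y_i's, then \widehat{\boldsymbol{\beta}}-\log(a)\,e_{\text{intercept}} (where e_{\text{intercept}} is the coordinate vector of the intercept) solves it for the Y_i/a's, since

\sum_i \left(\frac{Y_i}{a}-\exp(\boldsymbol{x}_i'\widehat{\boldsymbol{\beta}}-\log a)\right)\boldsymbol{x}_i=\frac{1}{a}\sum_i \left(Y_i-\exp(\boldsymbol{x}_i'\widehat{\boldsymbol{\beta}})\right)\boldsymbol{x}_i=\boldsymbol{0}

so every prediction is divided by a, and multiplying back by a recovers the original reserve.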

The trick here comes from the fact that we like the Poisson interpretation. But fitting a GLM simply means solving a first-order condition. That first-order condition can be written explicitly, and it does not involve any requirement that the observations be integers. To keep the code simple, the intercept should be related to the last cell of the matrix, not the first one.

> base$D=relevel(as.factor(base$D),"6")
> base$A=relevel(as.factor(base$A),"2006")
> reg=glm(Y~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), family = poisson(link = "log"), 
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)    
(Intercept)       3.54723    0.21921  16.182  < 2e-16 ***
as.factor(D)1     5.01244    0.21877  22.912  < 2e-16 ***
as.factor(D)2     4.04731    0.21896  18.484  < 2e-16 ***
as.factor(D)3     0.86391    0.22827   3.785 0.000154 ***
as.factor(D)4    -0.09254    0.25229  -0.367 0.713754    
as.factor(D)5    -0.93717    0.32643  -2.871 0.004092 ** 
as.factor(A)2001 -0.50271    0.02079 -24.179  < 2e-16 ***
as.factor(A)2002 -0.43831    0.02045 -21.433  < 2e-16 ***
as.factor(A)2003 -0.30029    0.01978 -15.184  < 2e-16 ***
as.factor(A)2004 -0.19096    0.01930  -9.895  < 2e-16 ***
as.factor(A)2005 -0.05864    0.01879  -3.121 0.001799 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52
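For reference (a sketch with generic notation, not objects from the post): writing X for the design matrix and \mu(\beta)=\exp(X\beta), the Poisson log-likelihood has gradient and Hessian

\nabla\ell(\beta)=X'\left(Y-\mu(\beta)\right),\qquad \nabla^2\ell(\beta)=-X'\,\text{diag}(\mu(\beta))\,X

and these are precisely the two quantities (gradient and hessienne) computed inside the loop below.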

The first idea is to solve it numerically, with a Newton–Raphson type algorithm, using the gradient and the Hessian of the log-likelihood (the starting point will be the coefficients from a linear regression on the log of the observations),

> YNA <- Y
> XNA=matrix(0,length(Y),1+5+5)
> XNA[,1]=rep(1,length(Y))
>   for(k in 1:5) XNA[(k-1)*6+1:6,k+1]=k
>   u=(1:(length(Y))%%6); u[u==0]=6
>   for(k in 1:5) XNA[u==k,k+6]=k 
>     YnoNA=YNA[is.na(YNA)==FALSE]
>     XnoNA=XNA[is.na(YNA)==FALSE,]
>     beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>     for(s in 1:50){
+     Ypred=exp(XnoNA%*%beta)
+     gradient=t(XnoNA)%*%(YnoNA-Ypred)
+     omega=matrix(0,nrow(XnoNA),nrow(XnoNA));diag(omega)=exp(XnoNA%*%beta) 
+     hessienne=-t(XnoNA)%*%omega%*%XnoNA
+     beta=beta-solve(hessienne)%*%gradient}
> beta
             [,1]
 [1,]  3.54723486
 [2,]  5.01244294
 [3,]  2.02365553
 [4,]  0.28796945
 [5,] -0.02313601
 [6,] -0.18743467
 [7,] -0.50271242
 [8,] -0.21915742
 [9,] -0.10009587
[10,] -0.04774056
[11,] -0.01172840

We are not too far from the values given by R (the design matrix used here codes factor level k with the value k rather than with an indicator, so the coefficients of levels 2 to 5 are scaled accordingly). Actually, it is just fine if we focus on the predictions

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

which are exactly the ones obtained above. And here, we clearly see that there is no assumption that the response variable should be an integer. It is also worth remembering that the first-order condition is the same as the one of a weighted least squares model. The catch is that the weights are functions of the prediction. But using an iterative algorithm, we should converge. More precisely, this is iteratively reweighted least squares,
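as the following sketch makes explicit (standard IRLS for a log link, generic notation): at each step, given the current \beta, compute \mu_i=\exp(\boldsymbol{x}_i'\beta), form the working response and weights

z_i=\boldsymbol{x}_i'\beta+\frac{Y_i-\mu_i}{\mu_i},\qquad w_i=\mu_i

and update \beta with the weighted least squares regression of z on the covariates. This is exactly what the loop below does,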

> beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>  for(i in 1:50){
+ Ypred=exp(XnoNA%*%beta)
+  z=XnoNA%*%beta+(YnoNA-Ypred)/Ypred
+  REG=lm(z~0+XnoNA,weights=Ypred)
+  beta=REG$coefficients
+ }
> 
> beta
     XnoNA1      XnoNA2      XnoNA3      XnoNA4      XnoNA5      XnoNA6
 3.54723486  5.01244294  2.02365553  0.28796945 -0.02313601 -0.18743467
     XnoNA7      XnoNA8      XnoNA9     XnoNA10     XnoNA11 
-0.50271242 -0.21915742 -0.10009587 -0.04774056 -0.01172840

which are the same values as the ones we got previously. Here again, the prediction is the same as the one we got from this so-called Poisson regression,

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 20.9
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

Again, it works just fine because GLMs are essentially conditions on the first two moments, and the numerical computations are based on the first-order condition, which carries fewer constraints than the interpretation in terms of a Poisson model.

Overdispersed Poisson and bootstrap

For the last class on claims reserving methods, we ended with simulation-based methods. Let us pick up where we left off in the previous post, where we saw that running a Poisson regression on the increments gives exactly the same amount as the Chain Ladder method,

> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

> y=as.vector(as.matrix(Y))
> base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
> reg2=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson) 
> summary(reg2)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = poisson, 
    data = base)

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)    
(Intercept)        8.05697    0.01551 519.426  < 2e-16 ***
as.factor(ai)2001  0.06440    0.02090   3.081  0.00206 ** 
as.factor(ai)2002  0.20242    0.02025   9.995  < 2e-16 ***
as.factor(ai)2003  0.31175    0.01980  15.744  < 2e-16 ***
as.factor(ai)2004  0.44407    0.01933  22.971  < 2e-16 ***
as.factor(ai)2005  0.50271    0.02079  24.179  < 2e-16 ***
as.factor(bj)1    -0.96513    0.01359 -70.994  < 2e-16 ***
as.factor(bj)2    -4.14853    0.06613 -62.729  < 2e-16 ***
as.factor(bj)3    -5.10499    0.12632 -40.413  < 2e-16 ***
as.factor(bj)4    -5.94962    0.24279 -24.505  < 2e-16 ***
as.factor(bj)5    -5.01244    0.21877 -22.912  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

Number of Fisher Scoring iterations: 4

> base$py2=predict(reg2,newdata=base,type="response")
> round(matrix(base$py2,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.7 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.1 53.1 20.4  8.8 22.4
[3,] 3863.7 1471.8 61.0 23.4 10.1 25.7
[4,] 4310.1 1641.9 68.0 26.1 11.2 28.7
[5,] 4919.9 1874.1 77.7 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.4 31.6 13.6 34.7

> sum(base$py2[is.na(base$y)])
[1] 2426.985

The most interesting point is perhaps that the Poisson distribution probably has too little variance…

> reg2b=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson)
> summary(reg2b)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = quasipoisson, 
    data = base)

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)        8.05697    0.02769 290.995  < 2e-16 ***
as.factor(ai)2001  0.06440    0.03731   1.726 0.115054    
as.factor(ai)2002  0.20242    0.03615   5.599 0.000228 ***
as.factor(ai)2003  0.31175    0.03535   8.820 4.96e-06 ***
as.factor(ai)2004  0.44407    0.03451  12.869 1.51e-07 ***
as.factor(ai)2005  0.50271    0.03711  13.546 9.28e-08 ***
as.factor(bj)1    -0.96513    0.02427 -39.772 2.41e-12 ***
as.factor(bj)2    -4.14853    0.11805 -35.142 8.26e-12 ***
as.factor(bj)3    -5.10499    0.22548 -22.641 6.36e-10 ***
as.factor(bj)4    -5.94962    0.43338 -13.728 8.17e-08 ***
as.factor(bj)5    -5.01244    0.39050 -12.836 1.55e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for quasipoisson family taken to be 3.18623)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: NA

Number of Fisher Scoring iterations: 4

But we will come back to that in a moment. Next, we had started to look at the errors made on the upper part of the triangle. Classically, by construction, the Pearson residuals are of the form

https://latex.codecogs.com/gif.latex?\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\text{Var}(Y_i)}}

We saw in the ratemaking course that the variance in the denominator could be replaced by the prediction, since in a Poisson model the mean and the variance are equal. So we considered

https://latex.codecogs.com/gif.latex?\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}

> base$erreur=(base$y-base$py2)/sqrt(base$py2)
> round(matrix(base$erreur,n,n),1)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]  0.9 -1.1 -1.5 -0.5 -0.4    0
[2,]  0.0  0.3 -2.2  0.8  0.4   NA
[3,]  0.1  0.1 -1.0 -0.3   NA   NA
[4,] -1.1  0.9  4.2   NA   NA   NA
[5,]  0.1 -0.2   NA   NA   NA   NA
[6,]  0.0   NA   NA   NA   NA   NA

The problem is that even if https://latex.codecogs.com/gif.latex?\widehat{Y}_i is – asymptotically – a good estimator of https://latex.codecogs.com/gif.latex?\text{Var}(Y_i), this is not the case in finite samples: we then have a biased estimator of the variance, and the residuals are therefore unlikely to have unit variance. So the variance estimator should be corrected, and we set

https://latex.codecogs.com/gif.latex?\varepsilon_i=\sqrt{\frac{n}{n-k}}\cdot\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}

which are the Pearson residuals as they should be used.

> E=base$erreur[is.na(base$y)==FALSE]*sqrt(21/(21-11))
> E
 [1]  1.374976e+00  3.485024e-02  1.693203e-01 -1.569329e+00  1.887862e-01
 [6] -1.459787e-13 -1.634646e+00  4.018940e-01  8.216186e-02  1.292578e+00
[11] -3.058764e-01 -2.221573e+00 -3.207593e+00 -1.484151e+00  6.140566e+00
[16] -7.100321e-01  1.149049e+00 -4.307387e-01 -6.196386e-01  6.000048e-01
[21] -8.987734e-15
> boxplot(E,horizontal=TRUE)

By resampling from those residuals, we can generate a pseudo triangle. For simplicity, we will generate a full rectangle, and restrict ourselves to the upper part,

> Eb=sample(E,size=36,replace=TRUE)
> Yb=base$py2+Eb*sqrt(base$py2)
> Ybna=Yb
> Ybna[is.na(base$y)]=NA
> Tb=matrix(Ybna,n,n)
> round(matrix(Tb,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3115.8 1145.4 58.9 46.0  6.4 26.9
[2,] 3179.5 1323.2 54.5 21.3 12.2   NA
[3,] 4245.4 1448.1 61.0  7.9   NA   NA
[4,] 4312.4 1581.7 68.7   NA   NA   NA
[5,] 4948.1 1923.9   NA   NA   NA   NA
[6,] 4985.3     NA   NA   NA   NA   NA

This time, we have a new triangle! We can then do several things:

  1. complete the triangle with the Chain Ladder method, i.e. compute the expected amounts to be paid in future years
  2. generate payment scenarios for future years, by drawing payments from Poisson distributions (centered on the expected amounts just computed)
  3. generate payment scenarios from distributions with more variance than the Poisson. Ideally, we would like to simulate quasi-Poisson distributions, but those are not genuine probability distributions. However, we can recall that, in that case, the Gamma distribution should give a good approximation (see the note below).
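A short note on why the Gamma works here (a sketch, under the quasi-Poisson assumption \text{Var}(Y)=\phi\,\lambda with mean \lambda): a Gamma distribution with shape a=\lambda/\phi and scale b=\phi has

\mathbb{E}(Y)=ab=\lambda,\qquad \text{Var}(Y)=ab^2=\phi\,\lambda

so it matches the first two moments of the over-dispersed quasi-Poisson model, which is exactly what the rqpois function below relies on.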

For this last point, we will use the following code to generate quasi-Poisson draws,

> rqpois = function(n, lambda, phi, roundvalue = TRUE) {
+ b = phi
+ a = lambda/phi
+ r = rgamma(n, shape = a, scale = b)
+ if(roundvalue){r=round(r)}
+ return(r)
+ }

I refer to the various lecture notes for more details on the justification, or to an old post. We will then write a small function which, from a given triangle, either sums the expected future payments, or sums simulated payment scenarios,

> CL=function(Tri){
+ y=as.vector(as.matrix(Tri))
+ base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
+ reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson)
+ py2=predict(reg,newdata=base,type="response")
+ pys=rpois(36,py2)
+ pysq=rqpois(36,py2,phi=3.18623)
+ return(list(
+ cl=sum(py2[is.na(base$y)]),
+ sc=sum(pys[is.na(base$y)]),
+ scq=sum(pysq[is.na(base$y)])))
+ }

It then remains to generate batches of triangles. Note, however, that it is possible to generate triangles with negative increments. To keep things simple, negative payments will be set to zero. The impact on the quantiles should (a priori) be negligible.

> VCL=VR=VRq=rep(NA,1000)
> for(s in 1:1000){
+ Eb=sample(E,size=36,replace=TRUE)*sqrt(21/(21-11))
+ Yb=base$py2+Eb*sqrt(base$py2)
+ Yb=pmax(Yb,0)
+ scY=rpois(36,Yb)
+ Ybna=Yb
+ Ybna[is.na(base$y)]=NA
+ Tb=matrix(Ybna,6,6)
+ C=CL(Tb)
+ VCL[s]=C$cl
+ VR[s]=C$sc
+ VRq[s]=C$scq
+ }

If we look at the distribution of the best estimate, we get

> hist(VCL,proba=TRUE,col="light blue",border="white",ylim=c(0,0.003))
> boxplot(VCL,horizontal=TRUE,at=.0023,boxwex = 0.0006,add=TRUE,col="light green")
> D=density(VCL)
> lines(D)
> I=which(D$x<=quantile(VCL,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(VCL,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)

But we can also visualize scenarios based on Poisson distributions (equidispersed), or quasi-Poisson scenarios (overdispersed), below

In the latter case, we can derive the 99% quantile of future payments.

> quantile(VRq,.99)
    99% 
2855.01

We therefore need to increase the reserves by roughly 18% (428 on top of a best estimate of 2427) to make sure the company can meet its obligations in 99% of cases,

> quantile(VRq,.99)-2426.985
    99% 
428.025

Reinsurance

The last class of the property and casualty actuarial science course will take place on Wednesday.

Among the additional readings: Introduction à la réassurance, published by Swiss Re, as well as some more technical documents, such as The Pareto model in property reinsurance, Exposure rating, Designing property reinsurance programmes, or Introduction to reinsurance accounting. Several reinsurers (and reinsurance brokers) publish technical studies on their websites, http://swissre.com/, http://munichre.com/, http://aon.com/, http://scor.com/ or http://guycarp.com/. Otherwise, I refer to Peter Antal's lecture notes, quantitative methods in reinsurance.

The slides are online here,

Regarding the most expensive losses for insurance and reinsurance companies, http://businessinsider.com/… gives the following ranking, in 2010 dollars (see also http://media.swissre.com/…)
  1. Hurricane Katrina (US, Bahamas, Cuba, Aug. 2005), $ 72.3 billion
  2. Tōhoku earthquake and tsunami (Japan, March 2011), $ 35 billion
  3. Hurricane Andrew (US, Bahamas, August 1992), $ 25 billion
  4. September 11 attacks (US) $ 23.1 billion
  5. Northridge earthquake (US) $ 20.6 billion
  6. Hurricane Ike (US, Haiti, Dominican Republic, Sept. 2008) $ 20.5 billion
  7. Hurricane Ivan (US, Barbados, Sept. 2004) $ 14.9 billion
  8. Hurricane Wilma (US, Mexico, Jamaica, Oct. 2005), $ 14 billion
  9. Hurricane Rita (US, Cuba, Sept. 2005) $ 11.3 billion
  10. Hurricane Charley (US, Cuba, Jamaica) $ 9.3 billion

For comparison, the revenues of the largest reinsurers (premiums written in 2010) were, according to http://www.insurancenetworking.com/…

  1. Munich Reinsurance Company $ 31.3 billion
  2. Swiss Reinsurance Company Limited $ 24.7 billion
  3. Hannover Rueckversicherung AG $ 15.1 billion
  4. Berkshire Hathaway Inc. $ 14.4 billion
  5. Lloyd’s $ 13 billion
  6. SCOR S.E. $  8.8 billion
  7. Reinsurance Group of America Inc. $ 7.2 billion
  8. Allianz S.E. $ 5.7 billion
  9. PartnerRe Ltd. $ 4.9 billion
  10. Everest Re Group Ltd. $ 4.2 billion

Reserving with negative increments in triangles

A few months ago, I published a post on negative values in triangles, and how to deal with them when using a Poisson regression (the post was written in French). The idea was to use a translation technique:

  1. Fit a model not on https://latex.codecogs.com/gif.latex?Y_i‘s but on https://latex.codecogs.com/gif.latex?Y_i^{(k)}=Y_i+k, for some https://latex.codecogs.com/gif.latex?k\geq%200,
  2. Use that model to make predictions, and then translate those predictions, https://latex.codecogs.com/gif.latex?\widehat{Y}_i^{(k)}-k

This is what was done to get the following graph, where a Poisson regression was fitted. Black points are https://latex.codecogs.com/gif.latex?Y_i‘s while blue points are https://latex.codecogs.com/gif.latex?\widehat{Y}_i^{(k)}‘s, for some https://latex.codecogs.com/gif.latex?k\geq%200. We fit a model to get the blue prediction, and then translate it to get the red prediction (on the https://latex.codecogs.com/gif.latex?Y_i‘s).
http://freakonometrics.blog.free.fr/public/perso4/glm-translation.gif

In this example, there were no negative values, but it can be used to get a better understanding of the impact of this technique. The prediction, here, is the red line. And clearly, the value of https://latex.codecogs.com/gif.latex?k has an impact on the prediction (since we do not consider, here, a linear model: with a linear model, translating has no impact at all, except on the intercept).
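A one-line way to see why (a sketch, with generic notation): with the identity link, the fitted mean of Y_i+k is simply \widehat{Y}_i+k, so subtracting k gives back \widehat{Y}_i whatever k is; with a log link, the model fitted on the translated data has mean \exp(\boldsymbol{x}_i'\boldsymbol{\beta}_k), and \exp(\boldsymbol{x}_i'\boldsymbol{\beta}_k)-k is in general not of the form \exp(\boldsymbol{x}_i'\boldsymbol{\beta}), so the translated-then-back-translated prediction depends on k.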

The alternative mentioned in the previous post was to use this technique for several https://latex.codecogs.com/gif.latex?k‘s, and then extrapolate

  1. For a given https://latex.codecogs.com/gif.latex?k, fit a model not on https://latex.codecogs.com/gif.latex?Y_i‘s but on https://latex.codecogs.com/gif.latex?Y_i^{(k)}=Y_i+k, use that model to make predictions, and then translate those predictions, https://latex.codecogs.com/gif.latex?\widehat{Y}_i^{(k)}-k.
  2. Do it for several https://latex.codecogs.com/gif.latex?k‘s.
  3. Use it to extrapolate when https://latex.codecogs.com/gif.latex?k is https://latex.codecogs.com/gif.latex?0 (which is the case we are interested in).

In the context of loss reserving, the idea is extremely simple. Consider a triangle with incremental payments

> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
> Y=T=PAID
> n=ncol(T)
> Y[,2:n]=T[,2:n]-T[,1:(n-1)]   
> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

Now, we do not have negative values here, but we can still see whether translation techniques can be used. The benchmark is the Poisson regression, since we can run it:

> y=as.vector(as.matrix(Y))
> base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
> reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson)

Here, the amount of reserves is the sum of predicted values in the lower part of the triangle,

> py=predict(reg,newdata=base,type="response")
> sum(py[is.na(base$y)])
[1] 2426.985

which is exactly Chain Ladder’s estimate.

Now, let us use a translation technique to compute the amount of reserves. The code will be

> decal=function(k){
+ reg=glm(y+k~as.factor(ai)+as.factor(bj),data=base,family=poisson)
+ py=predict(reg,newdata=base,type="response")
+ return(sum(py[is.na(base$y)]-k))
+ }

For instance, if we translate of +5, we would get

> decal(5)
[1] 2454.713

while a translation of +10 would return

> decal(10)
[1] 2482.29

Clearly, translations do have an impact on the estimation. Here, just to check: if we do not translate, we recover Chain Ladder's estimate,

> decal(0)
[1] 2426.985

The idea mentioned in the previous post was to try several translations, and then extrapolate, to get the value in 0. Here, translations will give the following estimates

> K=10:20
> (V=Vectorize(decal)(K))
 [1] 2482.290 2487.788 2493.279 2498.765 2504.245 2509.719 2515.187 2520.649
 [9] 2526.106 2531.557 2537.001

We can plot those values, and run a regression

> plot(K,V,xlim=c(0,20),ylim=c(2425,2540))
> abline(h=decal(0),col="red",lty=2)

the dotted horizontal line is Chain Ladder. Now, let us extrapolate

> b=data.frame(K=K,D=V)
> rk=lm(D~K,data=b)
> predict(rk,newdata=data.frame(K=0))
       1 
2427.623

One has to admit that it is not that bad. But yesterday evening, Karim asked me why I used a linear regression for my extrapolation. And to be honest, I do not know. I mean, the only answer might be that the points are almost on a straight line. So the first time I saw it, I was excited, and I ran a linear regression.

Now, let us see if we can do better. Because here, we use translations of +10 or +20 (which might be rather small). What if we use much larger values? (because we might have large negative incremental values). With the following code, we try, each time, 11 consecutive values, the smallest one going from 1 to 50,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }     
> plot(hausse,res,type="l",col="red",ylim=c(2422,2440))
> abline(rk,col="blue")

Here, we compute reserves when the extrapolation is based on 11 translations, from https://latex.codecogs.com/gif.latex?k to https://latex.codecogs.com/gif.latex?k+10, for different values of https://latex.codecogs.com/gif.latex?k. The case where https://latex.codecogs.com/gif.latex?k is ten is the one mentioned above,

> res[hausse==10]
[1] 2427.623

Actually, it might also be possible to consider not 11 translations, but 26, from https://latex.codecogs.com/gif.latex?k to https://latex.codecogs.com/gif.latex?k+25. Here, we get

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(25+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }   
> lines(hausse,res,type="l",col="blue",lty=2)

We now have the dotted line

Here, it is getting worse. So let us stick with 11 translations. Perhaps we can try something different. For instance a Poisson regression, with a log link (i.e. we consider an exponential extrapolation),

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson)
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }         
> lines(hausse,res,type="l",col="purple")

The purple line will be a Poisson model, with a log link. Perhaps we can try another link function, like a quadratic one

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=
+ power(lambda = 2)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }     
> lines(hausse,res,type="l",col="orange")

That would be the orange line,

Here, we need a link function between identity (the linear model, the blue line) and the quadratic one (the orange one), for instance a power function 3/2,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=
+ power(lambda = 1.5)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }         
> lines(hausse,res,type="l",col="green")

Here, it looks like we can use that model for any kind of translation, from +10 up to +50, even +100! But I do not have any intuition about the use of this power function…

Chain Ladder, with R

A quick post to put online some of the code typed in class last Wednesday. We started by converting the Excel worksheet into a text file, to make it easier to read,

> setwd("C:\\Users\\savsalledecours\\Desktop")
> triangle=read.table("exACT2040.csv",header=TRUE,sep=";")	
> triangle
  ANNEE   X0   X1   X2   X3   X4   X5
1  2000 3209 4372 4411 4428 4435 4456
2  2001 3367 4659 4696 4720 4730   NA
3  2002 3871 5345 5398 5420   NA   NA
4  2003 4239 5917 6020   NA   NA   NA
5  2004 4929 6794   NA   NA   NA   NA
6  2005 5217   NA   NA   NA   NA   NA

The idea – when importing a triangle – is to get a dataset in the form above, with missing values in the lower part of the triangle (we will see why this is useful when running a regression). We then computed the development factors, and completed the triangle at the same time,

> T=triangle[,2:7]
> rownames(T)=triangle$ANNEE
> T2=T
> n=ncol(T)
> L=rep(NA,n-1)
> for(j in 1:(n-1)){
+ L[j]=sum(T[1:(n-j),j+1])/sum(T[1:(n-j),j])
+ T2[(n-j+1):n,j+1]=L[j]*T2[(n-j+1):n,j]
+ }

The development factors are

> L
[1] 1.380933 1.011433 1.004343 1.001858 1.004735

and the completed triangle is

> T2
       X0       X1       X2       X3       X4       X5
2000 3209 4372.000 4411.000 4428.000 4435.000 4456.000
2001 3367 4659.000 4696.000 4720.000 4730.000 4752.397
2002 3871 5345.000 5398.000 5420.000 5430.072 5455.784
2003 4239 5917.000 6020.000 6046.147 6057.383 6086.065
2004 4929 6794.000 6871.672 6901.518 6914.344 6947.084
2005 5217 7204.327 7286.691 7318.339 7331.939 7366.656

The reserve amount is obtained here as the difference between the ultimate amount (in the last column) and the latest observed payments (on the second diagonal)

> CU=T2[,n]
> Pat=diag(as.matrix(T2[,n:1]))
> Ri=CU-Pat
> R=sum(Ri)

that is, numerically

> R
[1] 2426.985

We then saw that a tail factor could be computed, assuming an exponential decay of the development factors, and we added a column with the ultimate amount, per accident year,

> logL=log(L-1)
> t=1:5
> b=data.frame(logL,t)
> reg=lm(logL~t,data=b)
> logLp=predict(reg,newdata=data.frame(t=6:100))
> Lp=exp(logLp)+1
> Linf=prod(Lp)
> T3=T2
> T3$Xinf=T3$X5*Linf

We get

> T3
       X0       X1       X2       X3       X4       X5     Xinf
2000 3209 4372.000 4411.000 4428.000 4435.000 4456.000 4459.149
2001 3367 4659.000 4696.000 4720.000 4730.000 4752.397 4755.755
2002 3871 5345.000 5398.000 5420.000 5430.072 5455.784 5459.639
2003 4239 5917.000 6020.000 6046.147 6057.383 6086.065 6090.366
2004 4929 6794.000 6871.672 6901.518 6914.344 6947.084 6951.993
2005 5217 7204.327 7286.691 7318.339 7331.939 7366.656 7371.862
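As an aside, here is one possible sketch of the reserve computation including the tail factor (it simply reuses the Xinf column of T3 and the observed diagonal Pat computed above):

# reserve with tail factor: ultimate (including the tail) minus latest observed payments
sum(T3$Xinf-Pat)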

(I leave it to the reader to adapt the code to compute the reserve amount; the sketch above gives one possibility). Finally, we showed how to use a weighted regression to compute the development factors,

> T4=as.matrix(T$X0,n,1)
> for(j in 1:(n-1)){
+ Y=T[,j+1]
+ X=T[,j]
+ base=data.frame(X,Y)
+ reg=lm(Y~0+X,weights=1/X)
+ T4=cbind(T4,
+ predict(reg,
+ newdata=data.frame(X=T4[,j]
+ )))
+ }

which gives the same projection as the Chain Ladder method

> T4
  [,1]     [,2]     [,3]     [,4]     [,5]     [,6]
1 3209 4431.414 4482.076 4501.543 4509.909 4531.263
2 3367 4649.601 4702.758 4723.184 4731.961 4754.367
3 3871 5345.591 5406.705 5430.188 5440.279 5466.039
4 4239 5853.775 5920.698 5946.414 5957.464 5985.673
5 4929 6806.619 6884.435 6914.337 6927.186 6959.986
6 5217 7204.327 7286.691 7318.339 7331.939 7366.656

More next Wednesday, even though we will probably go very quickly over Mack's method (and the mean squared error computations, before getting to the Poisson regression). To be continued…

Solvency and claims reserving

On Wednesday, we will discuss in class the solvency of property and casualty insurance companies. More specifically, we will talk about provisions for claims outstanding (PCO), i.e. “the estimated total cost of ultimate settlement of all claims incurred before the date of record, whether reported or not, less any amounts already paid out in respect thereof.” I refer to Le contrôle de la solvabilité des compagnies d’assurance, available online on the OECD website, for an overview of the approaches to these provisions. The SOA published a report in 2009, Comparison of Incurred But Not Reported IBNR Methods, which I encourage you to read.

We will start working with triangles on Wednesday. Among the triangles we will use,

> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")

which contains several datasets, including

> PAID
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 4372 4411 4428 4435 4456
[2,] 3367 4659 4696 4720 4730   NA
[3,] 3871 5345 5398 5420   NA   NA
[4,] 4239 5917 6020   NA   NA   NA
[5,] 4929 6794   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

as well as the triangle discussed on http://rworkingparty.wikidot.com/

> OthLiabData = read.csv("http://www.casact.org/research/reserve_data/othliab_pos.csv",header=TRUE, sep=",")
> library(ChainLadder)
> library(plyr)
> OL = SumData=ddply(OthLiabData,.(AccidentYear,DevelopmentYear,DevelopmentLag),summarise,IncurLoss=sum(IncurLoss_h1-BulkLoss_h1),
+ CumPaidLoss=sum(CumPaidLoss_h1), EarnedPremDIR=sum(EarnedPremDIR_h1))
> LossTri = as.triangle(OL, origin="AccidentYear",
+ dev = "DevelopmentLag", value="IncurLoss")
> Year = as.triangle(OL, origin="AccidentYear",
+ dev = "DevelopmentLag", value="DevelopmentYear")
> TRIANGLE=LossTri
> TRIANGLE[Year>1997]=NA
> TRIANGLE
      dev
origin      1      2      3      4      5      6      7      8      9     10
  1988 128747 195938 241180 283447 297402 308815 314126 317027 319135 319559
  1989 135147 208767 270979 304488 330066 339871 344742 347800 353245     NA
  1990 152400 238665 297495 348826 359413 364865 372436 372163     NA     NA
  1991 151812 266245 357430 400405 423172 442329 460713     NA     NA     NA
  1992 163737 269170 347469 381251 424810 451221     NA     NA     NA     NA
  1993 187756 358573 431410 476674 504667     NA     NA     NA     NA     NA
  1994 210590 351270 486947 581599     NA     NA     NA     NA     NA     NA
  1995 213141 351363 444272     NA     NA     NA     NA     NA     NA     NA
  1996 237162 378987     NA     NA     NA     NA     NA     NA     NA     NA
  1997 220509     NA     NA     NA     NA     NA     NA     NA     NA     NA

Midterm exam, elements of correction

The midterm exam questions are available in pdf here and, as announced by email, the solution is in the pdf online. Since no one seems to disagree with the proposed answers, the grades will be posted very soon. Regarding questions 18 and 19, here are some additional explanations (which I had not typed in the pdf). We saw that the maximum likelihood estimator in a Poisson regression is asymptotically Gaussian,

https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\beta}}_{P}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{P}))

(asymptotically) with

https://latex.codecogs.com/gif.latex?V_\infty(\widehat{\boldsymbol{\beta}}_{P})=\left(\sum_{i=1}^n%20\widehat%20Y_i%20\boldsymbol{X}_i\boldsymbol{X}_i%27\right)^{-1}

For a negative binomial type regression, write, in full generality, https://latex.codecogs.com/gif.latex?\omega_i=\text{Var}(Y_i|\boldsymbol{X}_i) (we saw in class that several specifications of this conditional variance are possible). In that case,

https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\beta}}_{BN}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{BN}))

with

https://latex.codecogs.com/gif.latex?V_\infty(\widehat{\boldsymbol{\beta}}_{P})=\left(\sum_{i=1}^n%20\widehat%20Y_i%20\boldsymbol{X}_i\boldsymbol{X}_i%27\right)^{-1}\left[\sum_{i=1}^n%20\omega_i%20\boldsymbol{X}_i\boldsymbol{X}_i\right]\left(\sum_{i=1}^n%20\widehat%20Y_i%20\boldsymbol{X}_i\boldsymbol{X}_i%27\right)^{-1}

In short, everything fundamentally depends on the specification of the conditional variance. In R, it is the type 1 negative binomial regression that is considered, i.e.

https://latex.codecogs.com/gif.latex?\omega_i=\text{Var}(Y_i|\boldsymbol{X}_i)=\phi\cdot%20\mathbb{E}(Y_i|\boldsymbol{X}_i)=\phi%20\cdot%20\widehat{Y}_i

We still have a relation of the form

https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\beta}}_{QP}\sim\mathcal{N}(\boldsymbol{\beta},V_\infty(\widehat{\boldsymbol{\beta}}_{QP}))

with (simplifying a little)

https://latex.codecogs.com/gif.latex?V_\infty(\widehat{\boldsymbol{\beta}}_{QP})=\phi\cdot\left(\sum_{i=1}^n%20\widehat%20Y_i%20\boldsymbol{X}_i\boldsymbol{X}_i%27\right)^{-1}

so that

https://latex.codecogs.com/gif.latex?V_\infty(\widehat{\boldsymbol{\beta}}_{QP})=\phi\cdot%20V_\infty(\widehat{\boldsymbol{\beta}}_{P})
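A quick numerical check of that last relation (a sketch; it assumes the objects reg2 and reg2b fitted in the overdispersed Poisson post above are still in the workspace):

# quasi-Poisson standard errors are the Poisson ones, multiplied by sqrt(phi)
se_p  = summary(reg2)$coefficients[,2]
se_qp = summary(reg2b)$coefficients[,2]
phi   = summary(reg2b)$dispersion
se_qp/se_p      # ratio should be (roughly) constant
sqrt(phi)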

But, as announced in class, points were given to those who simply stated that the variance of the estimators is larger when there is overdispersion.

Triangles and claims reserving

The first part of the course on claims reserving (computing provisions for claims outstanding) will take place in 10 days. The slides are online here, and cover the construction of payment triangles. The Chain Ladder method (and the formalization proposed by Thomas Mack), as well as its extensions, will be presented. The second part will focus on methods based on Poisson regression.

Multiple (smoothed) regression and portfolio exposure

Wednesday, in class, we’ve seen how to visualize a multiple regression model (with two continuous explanatory variables). Here, the goal is to predict the average cost of an insurance claim, using some covariates, e.g. the age of the driver, and the age of the car (recall that losses here are liability losses). The prediction is obtained from a (standard) generalized linear model, with a log-link,

> reg1=glm(cout~ageconducteur+agevehicule,data=base,family=Gamma(link="log"))

The code to visualize the predicted average cost is the following: first, we have to compute predictions for specific values,

> pred=function(x,y){
+ predict(reg1,newdata=data.frame(ageconducteur=x,
+ agevehicule=y),type="response")
+ }

Then, we use this function to compute values on a grid,

> X=seq(20,80,by=5)
> Y=0:20
> Z=outer(X,Y,pred)
> image(X,Y,Z,col=rev(heat.colors(101)))
> contour(X,Y,Z,add=TRUE,
+ levels=c(1400,1800,2000,2200,2400,2600,2800,3000,3200,4000,5000))

If we use factors instead of continuous variables (cut versions of those two variables),

> reg2=glm(cout~cut(ageconducteur,breaks=c(0,22,35,55,80,100))*
+               cut(agevehicule,breaks=c(-1,1,3,5,10,100)),
+ data=base,family=Gamma(link="log"))

(note that we consider the Cartesian product, so values are computed for each combination of the two factors, age of the driver and age of the car) we obtain

Obviously, we’re missing something here: the most expensive class under one model is the cheapest under the other one! Of course, it might come from our classes (that were chosen somewhat arbitrarily), but it might be interesting to use nonlinear functions of the ages. So, let us use splines to smooth those two variables,

> library(splines)
> reg3=glm(cout~bs(ageconducteur)+bs(agevehicule),data=base,
+ family=Gamma(link="log"))

With additive smoothed functions, we obtained a symmetric graph (due to the additive property)

while with a bivariate spline

> library(mgcv)
> reg4=gam(cout~s(ageconducteur,agevehicule),data=base,
+ family=Gamma(link="log"))

(for some odd reason, I could not easily use a bivariate spline in the Generalized Linear Model, but it did work with a Generalized Additive Model – which is, by no means, additive now). We can identify here some regions where the average cost can be extremely expensive… But, as mentioned Wednesday, one should keep in mind that some parts of the square above are never reached. More precisely, the distribution of the portfolio, as a function of those two covariates, is the following
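The original exposure figure is not reproduced here, but a possible sketch of how such a plot can be obtained (assuming the same base data frame as above; observations outside the 20–80 / 0–20 grid are simply dropped) is:

# empirical exposure of the portfolio on the same grid (driver age x car age)
Xc=cut(base$ageconducteur,breaks=seq(20,80,by=5))
Yc=cut(base$agevehicule,breaks=seq(0,20,by=2))
E=table(Xc,Yc)
Z=matrix(E,nrow(E),ncol(E))     # plain matrix of counts
image(seq(20,80,by=5)[-1],seq(0,20,by=2)[-1],Z,
      col=rev(heat.colors(101)),xlab="age of the driver",ylab="age of the car")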

Thus, the proportion of young drivers driving a brand new car, and the proportion of old drivers driving a very old car, are rather small… If the goal is to find niches, one should look at the prediction more carefully, but if the goal is to make sure that everyone can get insurance cover, maybe we should accept that some drivers are under-priced (especially when they are rare in the portfolio). And one should keep in mind that average costs are extremely sensitive to large losses, as discussed previously http://freakonometrics.hypotheses.org/3490 (and in class)

In the univariate case, I have migrated an old post, where I tried to reproduce (in R and in French) some standard graphs in the insurance industry: it is always interesting to visualize not only the prediction obtained from our models, but also the size of each class in the portfolio,

The post is online here http://freakonometrics.hypotheses.org/1224

Readings on IBNR and claims reserving

The second part of the course on nonlife insurance will be dedicated to IBNR and claims reserving techniques. The main reference is the textbook by Mario Wüthrich and Michael Merz (a preliminary version can be downloaded from http://actuaries.ch/…)

The first reference is Best Estimates for Reserves by Glen Barnett and Ben Zehnwirth, available online at http://casact.org/pubs/…. In 2004, Ben Zehnwirth, Julie Sims and Mark Shapland published Will Your Next Reserve Increase Be Your Last, available at http://contingencies.org/janfeb04/…. Finally, on simulation-based techniques, The Actuary published an article on the bootstrap, http://insureware.com/Library/… For further readings, here are some articles, found in the CAS forums, the ASTIN conferences, etc,

Datasets for logistic and Poisson regression

For Wednesday's class, here are two small datasets, to practice modeling a 0/1 variable or a count variable,

> base = read.table("http://freakonometrics.free.fr/base-glm-act2040.txt",
+ header=TRUE)

or

> base = read.table("http://freakonometrics.free.fr/base-pratique-act2040.txt",
+ header=TRUE)

Otherwise, here is a more complete dataset for ratemaking,

> BASEN=read.table("http://freakonometrics.free.fr/baseN.txt",header=TRUE,sep=";")
> BASEY=read.table("http://freakonometrics.free.fr/baseY.txt",header=TRUE,sep=";")
> head(BASEN)
ageconducteur agepermis sexeconducteur situationfamiliale  habitation zone
1            57        39              F             Celiba peri-urbain    8
2            54        35              H             Celiba      urbain    3
3            51        32              F             Celiba      urbain    1
4            53        35              H              Marie       rural    4
5            61        43              H              Marie      urbain    8
6            60        29              F              Marie peri-urbain    1
agevehicule proprietaire    payment  marque         poids     usage
1          12    locataire     Annuel  AUTRES     8.>3500kg PROMENADE
2          20     sans mrp Semestriel PEUGEOT 4.3100-3199kg PROMENADE
3           4     sans mrp     Annuel  RAPIDO     1.<2700kg PROMENADE
4           1     sans mrp     Annuel  AUTRES 3.3000-3099kg PROMENADE
5           1 proprietaire     Annuel    FIAT 6.3300-3399kg PROMENADE
6          10     sans mrp    Mensuel    FIAT     8.>3500kg PROMENADE
exposition nombre   voiture
1          1      0 Monospace
2          1      0   Berline
3          1      0  sans avp
4          1      0  sans avp
5          1      1 Monospace
6          1      0  sans avp

A (brief) description of the variables is the following,

  • ageconducteur: age of the main driver of the vehicle
  • agepermis: number of years since the main driver obtained his or her driving licence
  • sexeconducteur: gender of the main driver (H or F)
  • situationfamiliale: marital status of the main driver (“Celiba”, “Marie” or “Veuf/Div”)
  • habitation: type of area where the main driver lives (“peri-urbain”, “rural” or “urbain”)
  • zone: geographical zone (from 1 to 8)
  • agevehicule: age of the vehicle
  • proprietaire: if the main driver also has a home insurance policy, his or her status (“locataire” or “proprietaire”), otherwise “sans mrp”
  • payment: payment frequency of the motor insurance premium (“Annuel”, “Mensuel” or “Semestriel”)
  • marque: make of the vehicle
> levels(BASEN[,10])
[1] "ADRIA"       "AUTOSTAR"    "AUTRES"      "BURSTNER MOBIL"
[5] "CHALLENGER"  "CHAUSSON"    "CITROEN"     "FIAT"
[9] "FORD"        "HYMERMOBIL"  "MERCEDES"    "PEUGEOT"
[13] "PILOTE"     "RAPIDO"      "RENAULT"     "VOLKSWAGEN"
  • poids: weight class of the vehicle
> levels(BASEN[,11])
[1] "1.<2700kg"    "2.2700-2999kg""3.3000-3099kg""4.3100-3199kg"
[5] "5.3200-3299kg""6.3300-3399kg""7.3400-3499kg""8.>3500kg"
  • usage: use of the vehicle (“PROMENADE” or “TOUS_DEPLACEMENTS”)
  • exposition: exposure, in years
  • nombre: number of third-party liability claims of the main driver during the past year
  • cout: cost of the claim
  • voiture: type of vehicle
> levels(BASEN[,15])
[1] "Berline"            "Break"              "Buggy"
[4] "Cabriolet"          "Combispace"         "Coup\xe9"
[7] "Coup\xe9 Cabriolet" "Jeep"               "Minibus"
[10] "Minispace"          "Monospace"         "sans avp"

The variable of interest here is the number of claims,

> table(BASEN$nombre)

    0     1 
60155  3264

The dataset is a bit special – we will talk about it in class – since policyholders had either 0 or 1 claim during the year.
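As a starting point for practice, here is a minimal sketch (the choice of covariates is arbitrary, and the exposure is used as an offset; whether a Poisson or a binomial model is more appropriate here is precisely what we will discuss in class):

# count model on the ratemaking dataset, with exposure as an offset
reg_nombre=glm(nombre~sexeconducteur+agevehicule+offset(log(exposition)),
               data=BASEN,family=poisson(link="log"))
summary(reg_nombre)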

Reading week, and count data

As announced in class (for those who want to use the reading week to prepare), part of the midterm exam will be based on the dataset

> base=read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
> tail(base)
    SEX AGE YEARMARRIAGE CHILDREN RELIGIOUS EDUCATION OCCUPATION SATISFACTION Y
596   1  47         15.0        1         3        16          4            2 7
597   1  22          1.5        1         1        12          2            5 1
598   0  32         10.0        1         2        18          5            4 6
599   1  32         10.0        1         2        17          6            5 2
600   1  22          7.0        1         3        18          6            2 2
601   0  32         15.0        1         3        14          1            5 1

This dataset was built from the data of the article A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy. The variable of interest is (as its name suggests) Y, the number of extramarital affairs during the past year, with several explanatory variables

  • sex: 0 for a woman, 1 for a man
  • age: age of the respondent
  • yearmarriage: number of years of marriage
  • children: 0 if the respondent has no children (with his or her spouse), 1 otherwise
  • religious: degree of religiosity, from 1 (anti-religious) to 5 (very religious)
  • education: number of years of education, 9=grade school, 12=high school, …, 20=PhD
  • occupation: coded according to the Hollingshead scale (see http://cba.uah.edu/berkowd/….)
    • Higher executives of large concerns, proprietors, and major professionals (1)
    • Business managers, proprietors of medium-sized businesses, and lesser professionals (2)
    • Administrative personnel, owners of small businesses, and minor professionals (3)
    • Clerical and sales workers, technicians, and owners of little businesses (4)
    • Skilled manual employees (5)
    • Machine operators and semiskilled employees (6)
    • Unskilled employees (7)
  • satisfaction: self-rating of the marriage, from very unhappy (1) to very happy (5)
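For those who want to warm up during the break, here is a minimal count-regression sketch on these data (the choice of covariates is arbitrary, and by no means a hint about the exam):

# a first Poisson regression on the number of affairs
reg_affairs=glm(Y~SEX+AGE+YEARMARRIAGE+CHILDREN+as.factor(RELIGIOUS),
                data=base,family=poisson(link="log"))
summary(reg_affairs)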

A priori, I will not answer questions about this dataset. Good luck, and have a nice reading week.

Further readings on GLMs and ratemaking

Some articles found in actuarial journals, on ratemaking,

and in the CAS forums and ASTIN conference papers

The law of small numbers

In insurance, the law of large numbers (initially named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to legitimate large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (in a given period) – under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). In insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modeling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind this is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

  • The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let https://latex.codecogs.com/gif.latex?N denote a counting random variable; it is said to be Poisson distributed if there is https://latex.codecogs.com/gif.latex?\lambda\in(0,\infty) such that

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\forall%20k\in\mathbb{N}

De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among https://latex.codecogs.com/gif.latex?n insured. If individual death probabilities are identical, say https://latex.codecogs.com/gif.latex?p, and if deaths are independent events, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\forall%20k\in\{0,1,\cdots,n\}
And if https://latex.codecogs.com/gif.latex?n\rightarrow\infty and  https://latex.codecogs.com/gif.latex?np\rightarrow%20\lambda, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)\rightarrow%20e^{-\lambda}\frac{\lambda^k}{k!}

Again, this is an asymptotic theorem, which is valid when we have a lot of observations (https://latex.codecogs.com/gif.latex?n\rightarrow\infty) but when the probability of occurrence is extremely small (since https://latex.codecogs.com/gif.latex?p\sim\lambda/n), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of cours d’assises (jury panel might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 7 votes against 5, the opinion of professional judges was requested (11% of criminal trials). Using these statistics, Poisson could show that the probability that a defendant brought before an assize court is guilty was of the order of 68%, and that the probability that a juror did not make a mistake when voting (convicting an innocent person, or acquitting a guilty one) was about 54%. He sought to calculate the probability that a defendant is wrongfully convicted, and obtained 2%, while 28% of exonerated defendants were in fact guilty. Siméon Poisson introduced this distribution to compute such probabilities easily. But the law he considered turned out to be central in probability…

  • The law of small numbers

The heuristic of the main theorem related to the Poisson distribution is the following: let https://latex.codecogs.com/gif.latex?X_1,%20\cdots,X_n denote i.i.d. random variables taking values in https://latex.codecogs.com/gif.latex?%20\mathbb{R}^d (in a general setting, one component can be the time, and the other one a region of interest, where some stochastic process might live). Let https://latex.codecogs.com/gif.latex?\mathcal{A}_n\subset\mathbb{R}^d. If https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)\rightarrow%200 as https://latex.codecogs.com/gif.latex?n\rightarrow\infty (or https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)=O(n^{-1}), to be a little bit more specific about the assumptions), and if https://latex.codecogs.com/gif.latex?N denotes the (random) count of events https://latex.codecogs.com/gif.latex?\{X_i%20\in%20\mathcal{A}_n\}, then https://latex.codecogs.com/gif.latex?N can be approximated by a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda%20=n%20\times%20\mathbb%20P(X_i%20\in%20\mathcal{A}_n).
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is the number of points times the ratio of the area of the yellow disk to the area of the square, i.e.

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

http://freakonometrics.hypotheses.org/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-28-a%CC%80-11.14.21.png

To get an interpretation related to insurance modeling, let https://latex.codecogs.com/gif.latex?\mathcal{A} denote an upper layer in a reinsurance contract, i.e. https://latex.codecogs.com/gif.latex?\mathcal{A}=\{x%3Ed\} for some deductible https://latex.codecogs.com/gif.latex?d. Let https://latex.codecogs.com/gif.latex?X_i‘s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if the deductible https://latex.codecogs.com/gif.latex?d becomes extremely large (and https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A})\rightarrow%200), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if https://latex.codecogs.com/gif.latex?N has a Poisson distribution and if, conditionally on https://latex.codecogs.com/gif.latex?N, https://latex.codecogs.com/gif.latex?X_1,\cdots,X_N are independent identically distributed generalized Pareto random variables, then https://latex.codecogs.com/gif.latex?\max\{X_1,\cdots,X_N\} has a generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.

  • The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or two claims, in an insurance context).

  • Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus Von Bortkiewicz who first mentioned the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks, from 1875 till 1894, over 200 corps-years (more precisely, 10 corps observed over 20 years).

He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per corps and per year)

number of deaths per year   empirical counts   Poisson distribution
0                                  109                108.67
1                                   65                 66.21
2                                   22                 20.22
3                                    3                  4.11
4                                    1                  0.63
5 and more                           0                  0.08

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, if we consider the number of hurricanes that made landfall in Florida since 1850,

number of hurricanes per year   empirical frequency   Poisson frequency
0                                       30                 27.16
1                                       48                 47.99
2                                       37                 42.41
3                                       29                 24.98
4                                        8                 11.03
5                                        3                  3.90
6                                        3                  1.15
7                                        1                  0.29
8 and more                               0                  0.08
  • Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10, and 10 years is then the average waiting time before occurrence. This does not mean that the event will not occur before 10 years, or has to occur within 10 years. Consider a return period https://latex.codecogs.com/gif.latex?T (in years): the yearly probability of non-occurrence is https://latex.codecogs.com/gif.latex?1-(1/T).

The probability of observing at least one occurrence over https://latex.codecogs.com/gif.latex?n years is then https://latex.codecogs.com/gif.latex?1-[1-(1/T)]^n. It is standard to summarize this property with the following table, giving the probability of observing at least one catastrophe over https://latex.codecogs.com/gif.latex?n years, for a given return period https://latex.codecogs.com/gif.latex?T,

                              return period T
number of years (n)      10      20      50     100     200
10                     65.1%   40.1%   18.3%    9.6%    4.9%
20                     87.8%   64.2%   33.2%   18.2%    9.5%
50                     99.5%   92.3%   63.6%   39.5%   22.5%
100                    99.9%   99.4%   86.7%   63.4%   39.5%
200                    99.9%   99.9%   98.2%   86.6%   63.3%

The diagonal in the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over n years has a Binomial distribution with probability https://latex.codecogs.com/gif.latex?1/T=1/n, which converges towards the Poisson distribution with parameter 1. The probability of having at least one catastrophe is then https://latex.codecogs.com/gif.latex?1-\exp(-1), which is equal to 0.632.
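The table and the diagonal limit are easy to reproduce; a short sketch:

# probability of at least one event over n years, for return period Tr
Tr=c(10,20,50,100,200)
n=c(10,20,50,100,200)
round(100*outer(n,Tr,function(n,Tr) 1-(1-1/Tr)^n),1)
1-exp(-1)   # limiting value on the diagonal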

  • Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant in France, over a 50 year period. Assume that the annual probability of an incident in a given reactor, https://latex.codecogs.com/gif.latex?p, is small, e.g. 0.005%. Assume further that reactors are independent of one another, and over time. The probability of having an incident over 80 reactors in 50 years is (exactly)

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq0)=1-(1-p)^{50%20\times%2080}

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq%200)\neq%2050\times%2080\times%20p

On the other hand

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq 0)=1-(1-p)^{50\times80%20}%20\sim1-\exp\left(-50\times80\times%20p%20\right)

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that https://latex.codecogs.com/gif.latex?N is not null (i.e. that at least one incident occurs) when https://latex.codecogs.com/gif.latex?N has a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda=50\times80\times%20p. We clearly see here an application of De Moivre’s approximation in risk management.

Another way of looking at this problem is based on the following idea: given that in 45 years of observations on (roughly) 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), the average time between accidents can be estimated at about 16 years. For a single reactor, we can then assume that the average waiting time before an incident is 450 times 16 years, i.e. 7200 years. Put differently, the probability of having an incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50 / (7200/80). Also, the probability of having at least one major accident over 50 years, with 80 reactors, can be estimated by

https://latex.codecogs.com/gif.latex?1-\exp(-50\times%2080/7200)

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident for a single reactor!).