Category Archives: GLM

Regression models and interaction(s) between factors

In a regression model, we want to write something like

\mathbb{E}(Y\vert \boldsymbol{X}=\boldsymbol{x})=h(x_1,\cdots,x_k)

If we restrict ourselves to a linear model, we write

\mathbb{E}(Y\vert \boldsymbol{X}=\boldsymbol{x})=\beta_0+\beta_1x_1+\cdots+\beta_kx_k

or, equivalently,

Y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\varepsilon_i

But we suspect that we are missing something… in particular, we will miss all the possible interactions. We can cross the variables, and assume that

\mathbb{E}(Y\vert \boldsymbol{X}=\boldsymbol{x})=\beta_0+\sum_{j}\beta_jx_j+\sum_{j<j'}\beta_{j,j'}x_jx_{j'}

which can be extended further, to order 3,

\mathbb{E}(Y\vert \boldsymbol{X}=\boldsymbol{x})=\beta_0+\sum_{j}\beta_jx_j+\sum_{j<j'}\beta_{j,j'}x_jx_{j'}+\sum_{j<j'<j''}\beta_{j,j',j''}x_jx_{j'}x_{j''}

or even further.
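
To see what those models look like in practice, here is a small sketch using R's formula interface (the data frame df, the response y and the covariates x1, x2, x3 are placeholders here, not objects from this post),

reg.2 = lm(y~(x1+x2+x3)^2,data=df)   # main effects and all pairwise interactions
reg.3 = lm(y~(x1+x2+x3)^3,data=df)   # adds the three-way interaction as well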

Suppose that our variables are here categorical, and more precisely binary. Let us take a simple example, with (classical) credit risk data1. The dataset can be found via

library(evtree)
db=GermanCredit

or directly

myVariableNames = c("checking_status","duration","credit_history",
"purpose","credit_amount","savings","employment","installment_rate",
"personal_status","other_parties","residence_since","property_magnitude",
"age","other_payment_plans","housing","existing_credits","job",
"num_dependents","telephone","foreign_worker","class")

GermanCredit = read.table(
"http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data",
header=FALSE,col.names=myVariableNames)

To start with, let us keep three explanatory variables,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"))
reg=glm(Y~X1+X2+X3,data=db,family=binomial)
summary(reg)

The regression without interactions gives here

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5431  -0.8421  -0.6295   1.3994   1.9999  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -1.8544     0.1699 -10.915  < 2e-16 ***
X1TRUE        0.3363     0.1496   2.249   0.0245 *  
X2TRUE        1.3462     0.2347   5.735 9.76e-09 ***
X3TRUE        1.0001     0.1787   5.596 2.19e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1143.6  on 996  degrees of freedom
AIC: 1151.6

Number of Fisher Scoring iterations: 4

There are several possible interactions here (let us restrict ourselves to pairs). This is what we observe when we run the regression

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3 + X1:X2 + X1:X3 + X2:X3, family = binomial, 
    data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5369  -0.8281  -0.6439   1.3954   1.9638  

Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)   -1.77109    0.20070  -8.825  < 2e-16 ***
X1TRUE         0.30296    0.33737   0.898 0.369186    
X2TRUE         0.88353    0.54255   1.628 0.103421    
X3TRUE         0.87709    0.22583   3.884 0.000103 ***
X1TRUE:X2TRUE -0.37917    0.49343  -0.768 0.442225    
X1TRUE:X3TRUE  0.09178    0.37278   0.246 0.805522    
X2TRUE:X3TRUE  0.80923    0.58185   1.391 0.164293    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1141.0  on 993  degrees of freedom
AIC: 1155

Number of Fisher Scoring iterations: 4

We can draw a picture to visualize the interactions: we have three vertices (our three variables), and we visualize the interactions on the edges

indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",
xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives here, for our three variables

This model might seem incomplete, since we only look at the pairwise interactions between the factors. Actually, it is because the non-crossed variables are (visually) missing. We can add them if we want (at the risk of cluttering the picture)

cercle=function(c,r,cl) lines(c[1]+r*cos(seq(0,2*pi,length=501)),
c[2]+r*sin(seq(0,2*pi,length=501)),col=cl)

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives, here,

If we change the ‘direction‘ of our variables (recoding them the other way around, swapping the TRUEs and the FALSEs), we obtain the following graph

dbinv=db
dbinv[,2:k]=1-dbinv[,2:k]
reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=dbinv,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which can then be compared with the previous graph

With 5 variables, the number of possible interactions increases… even if many of them are likely to be non-significant. Let us focus, to start with, on all possible pairwise crossed interactions. To simplify the code, we will use two local functions,

vrepeach=function(x,e){
v=NULL
for(i in 1:length(e)){v=c(v,rep(x[i],each=e[i]))}
return(v)}
vreplength=function(x,l){
v=NULL
for(i in 1:length(l)){v=c(v,x[l[i]:length(x)])}
return(v)}
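
Just to see what these two helper functions produce (this quick check is not in the original code): with k=5 they enumerate the k(k-1)/2=10 pairs of indices, in exactly the same order as combn,

k=5
cbind(vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
t(combn(k,2))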

and then we adapt the previous code (the data frame db should now contain five binary explanatory variables X1,…,X5 – for instance the five factors used again at the end of this post – and k should be set to 5),

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives a more complex diagram,

We can also take just two variables, with 4 and (after grouping) 3 categories respectively. We will extract three indicator variables for the first one (the remaining category being the reference) and two for the second one,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status=="A12",
X2=GermanCredit$checking_status=="A13",
X3=GermanCredit$checking_status=="A14",
X4=GermanCredit$employment%in%c("A72","A73"),
X5=GermanCredit$employment%in%c("A74","A75"))
k=5
indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

We see that several interactions are then no longer possible, on the left-hand side of the graph (between the three categories of the same variable) as well as on the right-hand side (between the two categories of the other one)

We can also simplify the graphs, by displaying only the significant interactions.

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives, here,

Here, only one crossed interaction is significant, and almost all the variables are. And if we go back to the model with 5 factors,

db=data.frame(Y=GermanCredit$class-1,X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"),
X4=GermanCredit$employment%in%c("A71","A72"),
X5=GermanCredit$other_payment_plans=="A143")

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

we obtain

I do not know whether my graphs are relevant, or not. But I find them pretty. Actually, I stumbled somewhat by chance2 upon Taguchi's tables, developed by Gen'ichi Taguchi (田口 玄一). The problem is that I did not understand a thing… Well, let us say that I thought I understood, and then I kept on drawing pictures… If someone could explain Taguchi's graphs to me, on my example, I would be most grateful! Because I doubt that it is what I have been doing all along…

1. This dataset is used extensively in the fourth chapter of Computational Actuarial Science with R, to appear in the coming months.

2. In this case, chance was @Benavent, who sparked my curiosity this morning by telling me about these tables, which I had never heard of before! I had even quickly read Taniguchi (谷口 ジロー), and I could not see the connection with statistics….

SOA Webinar on Predictive Modeling

I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies, in a few days. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since there are animated pictures in the slides that cannot be visualized below, for instance). The Society of Actuaries asked specifically for a powerpoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file, for better quality… Sorry for the inconvenience. I will soon upload the lines of code needed to reproduce most of the graphs. All comments and remarks are welcome.

More significant? so what…

Following my non-life insurance class, this morning, I had an interesting question from a student, that I will try to illustrate, and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest, and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide interesting parts here, but it is to get quickly to my student’s point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – with another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement: “I would like to choose the first model because the explanatory variable is more significant, and therefore, this model should have a stronger predictive power”.

That's a nice idea, isn't it? Actually, I guess this is why I love teaching, because I would never have been able to come up with such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> library(MASS)
> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.
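
If one really wants to compare the two fitted distributions, a likelihood-based criterion is a more natural tool than standard errors; a minimal sketch (output not shown here, and not part of the original post) could be

> AIC(fitdistr(Y,"normal"))
> AIC(fitdistr(Y,"exponential"))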

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variables,

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a gamma distribution, using a GLM (where we regress here on a constant only), just to practice, because later on we will use a gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")

Now, we need a covariate, to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that the p-value in the regression will be close to 5% or 10%. So here, I did generate a variable so that the pair has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n); 
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)

To visualize the copula of the variables, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")

We are not too far away from independence (actually, there is a negative – and significant – Pearson correlation). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a gamma model, with a linear link function

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10

And here are the two predictions,

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0 since we do have, here, a positive variable that we would like to model) while the gamma one is here

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

Here, I do believe that the gamma (not to say the exponential) model is better, because it is clearly more coherent with the properties of the variable of interest. I trust the confidence interval obtained above with the gamma model more than the one obtained with a Gaussian distribution. Even if the parameter in the regression is “more significant”.

GLM, non-linearity and heteroscedasticity

Last week in the non-life insurance course, we’ve seen the theory of the Generalized Linear Models, emphasizing the two important components

  • the link function (which is actually the key component in predictive modeling)
  • the distribution, or the variance function

Just to illustrate, consider my favorite dataset

lin.mod = lm(dist~speed,data=cars)

A linear model means here Y_i=\beta_0+\beta_1X_i+\varepsilon_i

where the residuals are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like that

The idea here (in GLMs) is to assume Y\vert X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2)

which will produce the same model as the one described previously, based on some error term. That model can be visualized below,

attach(cars)
n=2
X= cars$speed 
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype ="detailed", box = FALSE)
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
sdgig=sqrt(summary(reggig)$dispersion)
x=seq(min(X),max(X),length=501)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
stp=251
x=rep(vX[j],stp)
y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"), sdgig)),max(Y)+15,length=stp)
z0=rep(0,stp)
z=dnorm(y, mgig[j], sdgig)
C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
polygon(C,border=NA,col="light blue",density=40)
C=trans3d(x,y,z0,mat)
lines(C,lty=2)
C=trans3d(x,y,z,mat)
lines(C,col="blue")}

We do have two parts here: the linear increase of the average, \mathbb{E}(Y\vert X=x)=\beta_0+\beta_1x and the constant variance of the normal distribution \text{Var}(Y\vert X=x)=\sigma^2.

On the other hand, if we assume a Poisson regression,

poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))

we have something like

This time, two things have changed simultaneously: our model is no longer linear, it is an exponential one \mathbb{E}(Y\vert X=x)=e^{\beta_0+\beta_1x}, and the variance is also increasing with the explanatory variable \text{Var}(Y\vert X=x)=e^{\beta_0+\beta_1x}, since with a Poisson regression,
Y\vert X=x\sim\mathcal{P}(e^{\beta_0+\beta_1x})

If we adapt the previous code, we get
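
The full adapted code is not reproduced here; a minimal sketch of the part that changes – the conditional distributions become Poisson probabilities at integer values, instead of Gaussian densities – could be the following, reusing df, n, vX and the projection matrix mat defined above (the vertical scale zlim of the persp call may need to be enlarged),

regpoi=glm(Y~X,data=df,family=poisson(link="log"))
for(j in n:1){
lambda=predict(regpoi,newdata=data.frame(X=vX[j]),type="response")
y=0:120
z=dpois(y,lambda)
C0=trans3d(rep(vX[j],length(y)),y,rep(0,length(y)),mat)
C1=trans3d(rep(vX[j],length(y)),y,z,mat)
segments(C0$x,C0$y,C1$x,C1$y,col="blue")
}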

The problem is that we changed two things when we introduced the Poisson regression from the linear model. So let us look at what happens when we change the two components independently. First, we can change the link function, with a Gaussian model but this time a multiplicative model (with a logarithm link function)

gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))

which is still, here, a homoscedastic model, but this time non-linear. Or we can change the link function in the Poisson regression, to get a linear model, but heteroscedastic

poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))
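
To see the effect of each component on the fitted mean, the four regression curves can be plotted on the same scatterplot; this small comparison is a sketch added here, not part of the original post,

plot(cars$speed,cars$dist,pch=19,xlab="speed",ylab="dist")
u=seq(4,25,by=.1)
lines(u,predict(lin.mod,newdata=data.frame(speed=u)),lty=1)
lines(u,predict(poisson.reg,newdata=data.frame(speed=u),type="response"),lty=2)
lines(u,predict(gaussian.reg,newdata=data.frame(speed=u),type="response"),lty=3)
lines(u,predict(poisson.lin,newdata=data.frame(speed=u),type="response"),lty=4)
legend("topleft",c("Gaussian, identity link","Poisson, log link",
"Gaussian, log link","Poisson, identity link"),lty=1:4)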

So this is basically what GLMs are about….

Earthquake dynamics

I just uploaded on http://hal.archives-ouvertes.fr/hal-00871883 a joint paper entitled Modeling earthquake dynamics.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We do use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.

ROC curves and classification

To get back to a question asked after the last course (still on non-life insurance), I will spend some time discussing ROC curve construction, and interpretation. Consider the dataset we've been using last week,

> db = read.table("http://freakonometrics.free.fr/db.txt",header=TRUE,sep=";")
> attach(db)

The first step is to get a model. For instance, a logistic regression, where some factors were merged together,

> X3bis=rep(NA,length(X3))
> X3bis[X3%in%c("A","C","D")]="ACD"
> X3bis[X3%in%c("B","E")]="BE"
> db$X3bis=as.factor(X3bis)
> reg=glm(Y~X1+X2+X3bis,family=binomial,data=db)

From this model, we can predict a probability, not a 0/1 variable,

> S=predict(reg,type="response")

Let \widehat{S} denote this variable (actually, we can use the score, or the predicted probability, it will not change the construction of our ROC curve). What if we really want to predict a 0/1 variable, as we usually do in decision theory? The idea is to consider a threshold s, so that

  • if \widehat{S}_i>s, then \widehat{Y}_i will be 1, or “positive” (using a standard terminology)
  • if \widehat{S}_i\leq s, then \widehat{Y}_i will be 0, or “negative”

Then we derive a contingency table, or a confusion matrix

                              observed value Y
predicted value \widehat{Y}   “positive”   “negative”
“positive”                        TP           FP
“negative”                        FN           TN

where TP are the so-called true positives, TN the true negatives, FP the false positives (or type I errors) and FN the false negatives (type II errors). We can get that contingency table for a given threshold s

> roc.curve=function(s,print=FALSE){
+ Ps=(S>s)*1
+ FP=sum((Ps==1)*(Y==0))/sum(Y==0)
+ TP=sum((Ps==1)*(Y==1))/sum(Y==1)
+ if(print==TRUE){
+ print(table(Observed=Y,Predicted=Ps))
+ }
+ vect=c(FP,TP)
+ names(vect)=c("FPR","TPR")
+ return(vect)
+ }
> threshold = 0.5
> roc.curve(threshold,print=TRUE)
        Predicted
Observed   0   1
       0   5 231
       1  19 745
      FPR       TPR 
0.9788136 0.9751309

Here, we also compute the false positive rates, and the true positive rates,

  • TPR = TP / P = TP / (TP + FN), also called sensitivity, defined as the rate of true positives: the probability of being predicted positive, given that someone is positive (true positive rate)
  • FPR = FP / N = FP / (FP + TN) is the rate of false positives: the probability of being predicted positive, given that someone is negative (false positive rate)

The ROC curve is then obtained using several values for the threshold. For convenience, define

> ROC.curve=Vectorize(roc.curve)

First, we can plot (\widehat{S}_i,Y_i) (a standard predicted versus observed graph), and visualize true and false positives and negatives, using simple colors

> I=(((S>threshold)&(Y==0))|((S<=threshold)&(Y==1)))
> plot(S,Y,col=c("red","blue")[I+1],pch=19,cex=.7,xlab="",ylab="")
> abline(v=threshold,col="gray")

And for the ROC curve, simply use

> M.ROC=ROC.curve(seq(0,1,by=.01))
> plot(M.ROC[1,],M.ROC[2,],col="grey",lwd=2,type="l")

This is the ROC curve. Now, to see why it can be interesting, we need a second model. Consider for instance a classification tree

> library(tree)
> ctr <- tree(Y~X1+X2+X3bis,data=db)
> plot(ctr)
> text(ctr)

To plot the ROC curve, we just need to use the prediction obtained using this second model,

> S=predict(ctr)

All the code described above can be used. Again, we can plot (\widehat{S}_i,Y_i) (observe that we have 5 possible values for \widehat{S}_i, which makes sense since we do have 5 leaves on our tree). Then, we can plot the ROC curve,
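
For the comparison below, the matrix M.ROC.tree is simply the same vectorized function evaluated once S holds the predictions of the tree (a small step not shown in the original output),

> M.ROC.tree=ROC.curve(seq(0,1,by=.01))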

An interesting idea can be to plot the two ROC curves on the same graph, in order to compare the two models

> plot(M.ROC[1,],M.ROC[2,],type="l")
> lines(M.ROC.tree[1,],M.ROC.tree[2,],type="l",col="grey",lwd=2)

The most difficult part is to get a proper interpretation. The tree is not predicting well in the lower part of the curve. This concerns people with a very high predicted probability. If our interest is more in those with a probability lower than 90%, then we have to admit that the tree is doing a good job, since the ROC curve is always higher, compared with the logistic regression.
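
To complement the visual comparison, one can also compute the area under each curve; here is a minimal sketch (not in the original post), using a simple trapezoidal rule on the points already computed,

> auc=function(M){
+ o=order(M[1,])
+ sum(diff(M[1,o])*(M[2,o][-1]+M[2,o][-ncol(M)])/2)
+ }
> auc(M.ROC)
> auc(M.ROC.tree)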

Logistic regression and categorical covariates

A short post to get back – for my nonlife insurance course – on the interpretation of the output of a regression when there is a categorical covariate. Consider the following dataset

> db = read.table("http://freakonometrics.free.fr/db.txt",header=TRUE,sep=";")
> attach(db)
> tail(db)
     Y       X1       X2 X3
995  1 4.801836 20.82947  A
996  1 9.867854 24.39920  C
997  1 5.390730 21.25119  D
998  1 6.556160 20.79811  D
999  1 4.710276 21.15373  A
1000 1 6.631786 19.38083  A

Let us run a logistic regression on that dataset

> reg = glm(Y~X1+X2+X3,family=binomial,data=db)
> summary(reg)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -4.45885    1.04646  -4.261 2.04e-05 ***
X1           0.51664    0.11178   4.622 3.80e-06 ***
X2           0.21008    0.07247   2.899 0.003745 ** 
X3B          1.74496    0.49952   3.493 0.000477 ***
X3C         -0.03470    0.35691  -0.097 0.922543    
X3D          0.08004    0.34916   0.229 0.818672    
X3E          2.21966    0.56475   3.930 8.48e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 552.64  on 999  degrees of freedom
Residual deviance: 397.69  on 993  degrees of freedom
AIC: 411.69

Number of Fisher Scoring iterations: 7

Here, the reference is modality A. Which means that for someone with characteristics (x_1,x_2,\text{A}), we predict the following probability

\mathbb{P}(Y=1\vert x_1,x_2,\text{A})=H(\beta_0+\beta_1x_1+\beta_2x_2)

where H denotes the cumulative distribution function of the logistic distribution,

H(x)=\frac{e^x}{1+e^x}

For someone with characteristics (x_1,x_2,\text{B}), we predict the following probability

\mathbb{P}(Y=1\vert x_1,x_2,\text{B})=H(\beta_0+\beta_1x_1+\beta_2x_2+\beta_{\text{B}})

For someone with characteristics (x_1,x_2,\text{C}), we predict the following probability

\mathbb{P}(Y=1\vert x_1,x_2,\text{C})=H(\beta_0+\beta_1x_1+\beta_2x_2+\beta_{\text{C}})

(etc.) Here, if we accept H_0:\beta_{\text{C}}=0 (against H_1:\beta_{\text{C}}\neq 0), it means that modality C cannot be considered as different from A.
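
As a quick numerical check (not in the original post), the predicted probability for a given profile can be obtained either with predict, or directly from the coefficients through the logistic cdf; for instance, for someone with x_1=5, x_2=20 and modality B (values chosen arbitrarily here),

> nd=data.frame(X1=5,X2=20,X3="B")
> predict(reg,newdata=nd,type="response")
> plogis(sum(coef(reg)[c("(Intercept)","X1","X2","X3B")]*c(1,5,20,1)))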

A natural idea can be to change the reference modality, and to look at the p-values. If we consider the following loop, we get

> M = matrix(NA,5,5)
> rownames(M)=colnames(M)=LETTERS[1:5]
> for(k in 1:5){
+ db$X3 = relevel(X3,LETTERS[k])
+ reg = glm(Y~X1+X2+X3,family=binomial,data=db)
+ M[levels(db$X3)[-1],k] = summary(reg)$coefficients[4:7,4]
+ } 
> M
             A            B            C            D            E
A           NA 0.0004771853 9.225428e-01 0.8186723647 8.482647e-05
B 4.771853e-04           NA 4.841204e-04 0.0009474491 4.743636e-01
C 9.225428e-01 0.0004841204           NA 0.7506242347 9.194193e-05
D 8.186724e-01 0.0009474491 7.506242e-01           NA 1.730589e-04
E 8.482647e-05 0.4743636442 9.194193e-05 0.0001730589           NA

and if we simply want to know if the p-value exceeds – or not – 5%, we get the following,

> M.TF = M>.05
> M.TF
      A     B     C     D     E
A    NA FALSE  TRUE  TRUE FALSE
B FALSE    NA FALSE FALSE  TRUE
C  TRUE FALSE    NA  TRUE FALSE
D  TRUE FALSE  TRUE    NA FALSE
E FALSE  TRUE FALSE FALSE    NA

The first column is obtained when A is the reference, and then we see which parameters should be considered as null. The interpretation is the following:

  • C and D are not different from A
  • E is not different from B
  • A and D are not different from C
  • A and C are not different from D
  • B is not different from E

Note that we only have, here, some kind of intuition. So, let us run a more formal test. Let us consider the following regression (we remove the intercept to get a model easier to understand)

> library(car)
> db$X3=relevel(X3,"A")
> reg=glm(Y~0+X1+X2+X3,family=binomial,data=db)
> summary(reg)

Coefficients:
    Estimate Std. Error z value Pr(>|z|)    
X1   0.51664    0.11178   4.622 3.80e-06 ***
X2   0.21008    0.07247   2.899  0.00374 ** 
X3A -4.45885    1.04646  -4.261 2.04e-05 ***
X3E -2.23919    1.06666  -2.099  0.03580 *  
X3D -4.37881    1.04887  -4.175 2.98e-05 ***
X3C -4.49355    1.06266  -4.229 2.35e-05 ***
X3B -2.71389    1.07274  -2.530  0.01141 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1386.29  on 1000  degrees of freedom
Residual deviance:  397.69  on  993  degrees of freedom
AIC: 411.69

Number of Fisher Scoring iterations: 7

It is possible to test whether some coefficients are equal, or not (more generally, whether some linear constraints are satisfied), using linearHypothesis from the car package

> linearHypothesis(reg,c("X3A=X3C","X3A=X3D","X3B=X3E"))
Linear hypothesis test

Hypothesis:
X3A - X3C = 0
X3A - X3D = 0
- X3E  + X3B = 0

Model 1: restricted model
Model 2: Y ~ 0 + X1 + X2 + X3

  Res.Df Df  Chisq Pr(>Chisq)
1    996                     
2    993  3 0.6191      0.892

Here, we clearly accept the hypothesis that the coefficients of A, C and D are equal, as well as those of B and E. What is the next step? Well, if we believe that there are mainly two categories, {A,C,D} and {B,E}, let us create that factor,

> X3bis=rep(NA,length(X3))
> X3bis[X3%in%c("A","C","D")]="ACD"
> X3bis[X3%in%c("B","E")]="BE"
> db$X3bis=as.factor(X3bis)
> reg=glm(Y~X1+X2+X3bis,family=binomial,data=db)
> summary(reg)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -4.39439    1.02791  -4.275 1.91e-05 ***
X1           0.51378    0.11138   4.613 3.97e-06 ***
X2           0.20807    0.07234   2.876  0.00402 ** 
X3bisBE      1.94905    0.36852   5.289 1.23e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 552.64  on 999  degrees of freedom
Residual deviance: 398.31  on 996  degrees of freedom
AIC: 406.31

Number of Fisher Scoring iterations: 7

Here, all the categories are significant. So we do have a proper model.

Poisson regression on non-integers

In the course on claims reserving techniques, I did mention the use of Poisson regression, even if incremental payments were not integers. For instance, we did consider incremental triangles

>  source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
>  INC=PAID
>  INC[,2:6]=PAID[,2:6]-PAID[,1:5]
>  INC
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

On those payments, it is natural to use a Poisson regression, to predict future payments

>  Y=as.vector(INC)
>  D=rep(1:6,each=6)
>  A=rep(2001:2006,6)
>  base=data.frame(Y,D,A)
>  reg=glm(Y~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
>  Yp=predict(reg,type="response",newdata=base)
>  matrix(Yp,6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

and the total amount of reserves would be

>  sum(Yp[is.na(Y)==TRUE])
[1] 2426.985

Here, payments were in thousands of euros. What if they were in millions of euros?

> a=1000
> INC/a
      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]
[1,] 3.209 1.163 0.039 0.017 0.007 0.021
[2,] 3.367 1.292 0.037 0.024 0.010    NA
[3,] 3.871 1.474 0.053 0.022    NA    NA
[4,] 4.239 1.678 0.103    NA    NA    NA
[5,] 4.929 1.865    NA    NA    NA    NA
[6,] 5.217    NA    NA    NA    NA    NA

We can still run a regression here

> reg=glm((Y/a)~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
> Yp=predict(reg,type="response",newdata=base)
> sum(Yp[is.na(Y)==TRUE])*a
[1] 2426.985

and the prediction is exactly the same. Actually, it is possible to change the currency, or to multiply by any constant: the Poisson regression will always return the same prediction, as long as we use a log link function,

>  homogeneity=function(a=1){
+  reg=glm((Y/a)~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
+  Yp=predict(reg,type="response",newdata=base)
+  return(sum(Yp[is.na(Y)==TRUE])*a)
+  }
>  Vectorize(homogeneity)(10^(seq(-3,5)))
[1] 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985

The trick here comes from the fact that we do like the Poisson interpretation. But GLMs simply mean that we want to solve a first order condition. It is possible to solve that first order condition explicitly, and it was obtained without any requirement that the values should be integers. To keep the code simple, the intercept should be related to the last value of the matrix, not the first one.
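
For the record, the first order condition mentioned above is the score equation of the Poisson log-likelihood with a log link,

\sum_{i}\boldsymbol{X}_i\left(Y_i-e^{\boldsymbol{X}_i^{\text{T}}\boldsymbol{\beta}}\right)=\boldsymbol{0}

which involves no integrality constraint on the Y_i's.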

> base$D=relevel(as.factor(base$D),"6")
> base$A=relevel(as.factor(base$A),"2006")
> reg=glm(Y~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), family = poisson(link = "log"), 
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)    
(Intercept)       3.54723    0.21921  16.182  < 2e-16 ***
as.factor(D)1     5.01244    0.21877  22.912  < 2e-16 ***
as.factor(D)2     4.04731    0.21896  18.484  < 2e-16 ***
as.factor(D)3     0.86391    0.22827   3.785 0.000154 ***
as.factor(D)4    -0.09254    0.25229  -0.367 0.713754    
as.factor(D)5    -0.93717    0.32643  -2.871 0.004092 ** 
as.factor(A)2001 -0.50271    0.02079 -24.179  < 2e-16 ***
as.factor(A)2002 -0.43831    0.02045 -21.433  < 2e-16 ***
as.factor(A)2003 -0.30029    0.01978 -15.184  < 2e-16 ***
as.factor(A)2004 -0.19096    0.01930  -9.895  < 2e-16 ***
as.factor(A)2005 -0.05864    0.01879  -3.121 0.001799 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

The first idea is to solve that first order condition numerically, with a Newton–Raphson type algorithm, as follows (the starting point will be the coefficients from a linear regression on the log of the observations),

> YNA <- Y
> XNA=matrix(0,length(Y),1+5+5)
> XNA[,1]=rep(1,length(Y))
>   for(k in 1:5) XNA[(k-1)*6+1:6,k+1]=k
>   u=(1:(length(Y))%%6); u[u==0]=6
>   for(k in 1:5) XNA[u==k,k+6]=k 
>     YnoNA=YNA[is.na(YNA)==FALSE]
>     XnoNA=XNA[is.na(YNA)==FALSE,]
>     beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>     for(s in 1:50){
+     Ypred=exp(XnoNA%*%beta)
+     gradient=t(XnoNA)%*%(YnoNA-Ypred)
+     omega=matrix(0,nrow(XnoNA),nrow(XnoNA));diag(omega)=exp(XnoNA%*%beta) 
+     hessienne=-t(XnoNA)%*%omega%*%XnoNA
+     beta=beta-solve(hessienne)%*%gradient}
> beta
             [,1]
 [1,]  3.54723486
 [2,]  5.01244294
 [3,]  2.02365553
 [4,]  0.28796945
 [5,] -0.02313601
 [6,] -0.18743467
 [7,] -0.50271242
 [8,] -0.21915742
 [9,] -0.10009587
[10,] -0.04774056
[11,] -0.01172840

We are not too far away from the values given by R (the design matrix above codes factor level k with the value k, rather than 1, so some coefficients are simply scaled). Actually, it is just fine if we focus on the predictions

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

which are exactly the ones obtained above. And here, we clearly see that there is no assumption such as “the explained variable should be an integer”. It is also possible to remember that the first order condition is the same as the one we would get with a weighted least squares model. The problem is that the weights are functions of the predictions. But using an iterative algorithm (this is the idea behind iteratively reweighted least squares), we should converge,

> beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>  for(i in 1:50){
+ Ypred=exp(XnoNA%*%beta)
+  z=XnoNA%*%beta+(YnoNA-Ypred)/Ypred
+  REG=lm(z~0+XnoNA,weights=Ypred)
+  beta=REG$coefficients
+ }
> 
> beta
     XnoNA1      XnoNA2      XnoNA3      XnoNA4      XnoNA5      XnoNA6
 3.54723486  5.01244294  2.02365553  0.28796945 -0.02313601 -0.18743467
     XnoNA7      XnoNA8      XnoNA9     XnoNA10     XnoNA11 
-0.50271242 -0.21915742 -0.10009587 -0.04774056 -0.01172840

which are the same values as the ones we got previously. Here again, the prediction is the same as the one we got from the so-called Poisson regression,

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 20.9
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

Again, it works just fine because GLMs are essentially conditions on the first two moments, and the numerical computations are based on the first order condition, which has fewer constraints than the interpretation in terms of a Poisson model.

Multiple (smoothed) regression and portfolio exposure

Wednesday, in class, we've seen how to visualize a multiple regression model (with two continuous explanatory variables). Here, the goal is to predict the average cost of an insurance claim, using some covariates, e.g. the age of the driver, and the age of the car (recall that losses here are liability losses). The prediction is obtained from a (standard) generalized linear model, with a log link,

> reg1=glm(cout~ageconducteur+agevehicule,data=base,family=Gamma(link="log"))

The code to visualize the predicted average cost is the following: first, we have to compute predictions for specific values,

> pred=function(x,y){
+ predict(reg1,newdata=data.frame(ageconducteur=x,
+ agevehicule=y),type="response")
+ }

Then, we use this function to compute values on a grid,

> X=seq(20,80,by=5)
> Y=0:20
> Z=outer(X,Y,pred)
> image(X,Y,Z,col=rev(heat.colors(101)))
> contour(X,Y,Z,add=TRUE,
+ levels=c(1400,1800,2000,2200,2400,2600,2800,3000,3200,4000,5000))

If we use factors, and not continuous variates (cut versions of those two variates),

> reg2=glm(cout~cut(ageconducteur,breaks=c(0,22,35,55,80,100))*
+               cut(agevehicule,breaks=c(-1,1,3,5,10,100)),
+ data=base,family=Gamma(link="log"))

(note that we consider the Cartesian product, so values are computed for each product of factors, age of the driver and age of the car) we obtain

Obviously, we're missing something here: the most expensive class with one model is the cheapest with the other one! Of course, it might come from our classes (that were chosen a bit randomly), but it might be interesting to use nonlinear functions of the ages. So, let us use splines to smooth those two variables,

> library(splines)
> reg3=glm(cout~bs(ageconducteur)+bs(agevehicule),data=base,
+ family=Gamma(link="log"))

With additive smoothed functions, we obtained a symmetric graph (due to the additive property)

while with a bivariate spline

> library(mgcv)
> reg4=gam(cout~s(ageconducteur,agevehicule),data=base,
+ family=Gamma(link="log"))

(for some odd reason, I could not use – easily – a bivariate spline in the Generalized Linear Model, but it did work with a Generalized Additive Model – which is by no means additive now). We can identify here some regions where the average cost can be extremely expensive… But, as mentioned Wednesday, one should keep in mind that some parts of the square above are not reached. More precisely, the distribution of the portfolio, as a function of those two covariates, is the following

Thus, the proportion of young drivers driving a brand new car, and the proportion of old drivers driving a very old car, are rather small… If the goal is to find niches, one should look at the prediction more carefully, but if the goal is to make sure that everyone can get insurance cover, maybe we should accept that some drivers are under-priced (especially when they are rare in the portfolio). And one should keep in mind that average costs are extremely sensitive to large losses, as discussed previously in http://freakonometrics.hypotheses.org/3490 (and in class)
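
The exposure graph mentioned above can be produced, for instance, with a bivariate kernel density estimate of the two covariates; this is only a sketch of how such a figure could be obtained (the original one may have been drawn differently),

> library(MASS)
> density2d=kde2d(base$ageconducteur,base$agevehicule,n=101,
+ lims=c(20,80,0,20))
> image(density2d,col=rev(heat.colors(101)),xlab="Age of the driver",
+ ylab="Age of the car")
> contour(density2d,add=TRUE)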

In the univariate case, I have migrated an old post, where I tried to reproduce (in R and in French) some standard graphs in the insurance industry: it is always interesting to visualize not only the prediction obtained from our models, but also the size of each class in the portfolio,

The post is online here http://freakonometrics.hypotheses.org/1224

Further readings on GLMs and ratemaking

Some articles found in actuarial journals, on ratemaking,

and in the CAS forums, as well as in ASTIN conference papers

Modeling individual claim costs in ratemaking

Before ending the course on ratemaking, we will discuss the modeling of individual claim costs. We will talk about Gamma and lognormal distributions (for the latter, I suggest re-reading what was said in the regression models course about log-linear models, recalled in a short post published last fall). We will also talk about mixtures of distributions, and about multinomial models. The slides are online here,

To go further, there is the article by Fu & Moncher (2004) comparing the Gamma and the lognormal, http://casact.org/… or Holler, Sommer & Trahair (1999) http://casact.org/… which provided a state of the art, some fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, available online at http://casact.org/….

Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposures. But I did use only one factor to define classes. Of course, it is possible to use many more factors. For instance, using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)    # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24]  average = 0.06274415  variance = 0.06174966 
Class D A (24,40]  average = 0.07271905  variance = 0.07675049 
Class D A (40,65]  average = 0.05432262  variance = 0.06556844 
Class D A (65,101] average = 0.03026999  variance = 0.02960885 
Class D B (17,24]  average = 0.2383109   variance = 0.2442396 
Class D B (24,40]  average = 0.06662015  variance = 0.07121064 
Class D B (40,65]  average = 0.05551854  variance = 0.05543831 
Class D B (65,101] average = 0.0556386   variance = 0.0540786 
Class D C (17,24]  average = 0.1524552   variance = 0.1592623 
Class D C (24,40]  average = 0.0795852   variance = 0.09091435 
Class D C (40,65]  average = 0.07554481  variance = 0.08263404 
Class D C (65,101] average = 0.06936605  variance = 0.06684982 
Class D D (17,24]  average = 0.1584052   variance = 0.1552583 
Class D D (24,40]  average = 0.1079038   variance = 0.121747 
Class D D (40,65]  average = 0.06989518  variance = 0.07780811 
Class D D (65,101] average = 0.0470501   variance = 0.04575461 
Class D E (17,24]  average = 0.2007164   variance = 0.2647663 
Class D E (24,40]  average = 0.1121569   variance = 0.1172205 
Class D E (40,65]  average = 0.106563    variance = 0.1068348 
Class D E (65,101] average = 0.1572701   variance = 0.2126338 
Class D F (17,24]  average = 0.2314815   variance = 0.1616788 
Class D F (24,40]  average = 0.1690485   variance = 0.1443094 
Class D F (40,65]  average = 0.08496827  variance = 0.07914423 
Class D F (65,101] average = 0.1547769   variance = 0.1442915 
Class E A (17,24]  average = 0.1275345   variance = 0.1171678 
Class E A (24,40]  average = 0.04523504  variance = 0.04741449 
Class E A (40,65]  average = 0.05402834  variance = 0.05427582 
Class E A (65,101] average = 0.04176129  variance = 0.04539265 
Class E B (17,24]  average = 0.1114712   variance = 0.1059153 
Class E B (24,40]  average = 0.04211314  variance = 0.04068724 
Class E B (40,65]  average = 0.04987117  variance = 0.05096601 
Class E B (65,101] average = 0.03123003  variance = 0.03041192 
Class E C (17,24]  average = 0.1256302   variance = 0.1310862 
Class E C (24,40]  average = 0.05118006  variance = 0.05122782 
Class E C (40,65]  average = 0.05394576  variance = 0.05594004 
Class E C (65,101] average = 0.04570239  variance = 0.04422991 
Class E D (17,24]  average = 0.1777142   variance = 0.1917696 
Class E D (24,40]  average = 0.06293331  variance = 0.06738658 
Class E D (40,65]  average = 0.08532688  variance = 0.2378571 
Class E D (65,101] average = 0.05442916  variance = 0.05724951 
Class E E (17,24]  average = 0.1826558   variance = 0.2085505 
Class E E (24,40]  average = 0.07804062  variance = 0.09637156 
Class E E (40,65]  average = 0.08191469  variance = 0.08791804 
Class E E (65,101] average = 0.1017367   variance = 0.1141004 
Class E F (17,24]  average = 0           variance = 0 
Class E F (24,40]  average = 0.07731177  variance = 0.07415932 
Class E F (40,65]  average = 0.1081142   variance = 0.1074324 
Class E F (65,101] average = 0.09071118  variance = 0.1170159

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.58.26.png

An alternative is to use a tree. The tree can be obtained from another variable (whether or not the insured had a claim, during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period considered). Here, I did use the whole database (with more than 600,000 lines)

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split =  "gini",minsize =25000)

The tree is the following

> plot(T)
> text(T)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.55.13.png

Now, each leaf of the tree defines a class, and it is possible to use it as a classification variable. Each class is supposed to be homogeneous.

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)    # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+  }
Class  6 average =   0.04010406  variance = 0.04424163 
Class  8 average =   0.05191127  variance = 0.05948133 
Class  9 average =   0.07442635  variance = 0.08694552 
Class  10 average =  0.4143646   variance = 0.4494002 
Class  11 average =  0.1917445   variance = 0.1744355 
Class  15 average =  0.04754595  variance = 0.05389675 
Class  20 average =  0.08129577  variance = 0.0906322 
Class  22 average =  0.05813419  variance = 0.07089811 
Class  23 average =  0.06123807  variance = 0.07010473 
Class  24 average =  0.06707301  variance = 0.07270995 
Class  25 average =  0.3164557   variance = 0.2026906 
Class  26 average =  0.08705041  variance = 0.108456 
Class  27 average =  0.06705214  variance = 0.07174673 
Class  30 average =  0.05292652  variance = 0.06127301 
Class  31 average =  0.07195285  variance = 0.08620593 
Class  32 average =  0.08133722  variance = 0.08960552 
Class  34 average =  0.1831559   variance = 0.2010849 
Class  39 average =  0.06173885  variance = 0.06573939 
Class  41 average =  0.07089419  variance = 0.07102932 
Class  44 average =  0.09426152  variance = 0.1032255 
Class  47 average =  0.03641669  variance = 0.03869702 
Class  49 average =  0.0506601   variance = 0.05089276 
Class  50 average =  0.06373107  variance = 0.06536792 
Class  51 average =  0.06762947  variance = 0.06926191 
Class  56 average =  0.06771764  variance = 0.07122379 
Class  57 average =  0.04949142  variance = 0.05086885 
Class  58 average =  0.2459016   variance = 0.2451116 
Class  59 average =  0.05996851  variance = 0.0615773 
Class  61 average =  0.07458053  variance = 0.0818608 
Class  63 average =  0.06203737  variance = 0.06249892 
Class  64 average =  0.07321618  variance = 0.07603106 
Class  66 average =  0.07332127  variance = 0.07262425 
Class  68 average =  0.07478147  variance = 0.07884597 
Class  70 average =  0.06566728  variance = 0.06749411 
Class  71 average =  0.09159605  variance = 0.09434413 
Class  75 average =  0.03228927  variance = 0.03403198 
Class  76 average =  0.04630848  variance = 0.04861813 
Class  78 average =  0.05342351  variance = 0.05626653 
Class  79 average =  0.05778622  variance = 0.05987139 
Class  80 average =  0.0374993   variance = 0.0385351 
Class  83 average =  0.06721729  variance = 0.07295168 
Class  86 average =  0.09888492  variance = 0.1131409 
Class  87 average =  0.1019186   variance = 0.2051122 
Class  88 average =  0.05281703  variance = 0.0635244 
Class  91 average =  0.08332136  variance = 0.09067632 
Class  96 average =  0.07682093  variance = 0.08144446 
Class  97 average =  0.0792268   variance = 0.08092019 
Class  99 average =  0.1019089   variance = 0.1072126 
Class  100 average = 0.1018262   variance = 0.1081117 
Class  101 average = 0.1106647   variance = 0.1151819 
Class  103 average = 0.08147644  variance = 0.08411685 
Class  104 average = 0.06456508  variance = 0.06801061 
Class  107 average = 0.1197225   variance = 0.1250056 
Class  108 average = 0.0924619   variance = 0.09845582 
Class  109 average = 0.1198932   variance = 0.1209162

Here, when plotting the empirical variance (per leaf) against the empirical average of claims, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.05.08.png

Here, we can identify classes where some heterogeneity remains.

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency can be explained by some covariates, but individual costs too. Of course, appropriate families should be considered to model the distribution of the cost Y, given some covariates \boldsymbol{X}. Here is the dataset we'll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution or the inverse Gaussian, or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost is Gaussian). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal variable is not the exponential of the mean of the underlying Gaussian: if log(Y) is Gaussian with mean mu and variance sigma^2, then E(Y)=exp(mu+sigma^2/2). That correction should be made here to get an unbiased estimator for the average cost,

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,
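
For instance, a sketch with bs() from the splines package, for both the Gamma model and the lognormal one (the spline specification and the graphical options below are my own choices, not necessarily the ones behind the graph),

> library(splines)
> reggamma.bs <- glm(cout~bs(agevehicule),family=Gamma(link="log"),data=couts)
> Pgamma.bs <- predict(reggamma.bs,newdata=data.frame(agevehicule=age),type="response")
> reglm.bs <- lm(log(cout)~bs(agevehicule),data=couts)
> Pln.bs <- exp(predict(reglm.bs,newdata=data.frame(agevehicule=age))+summary(reglm.bs)$sigma^2/2)
> plot(age,Pgamma.bs,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln.bs,col="blue",type="b")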

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation of the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $ 4 million claim, with a 13 year old car. This is an outlier for the Gamma regression, which clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where the blue part is associated with normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like this: a large claim – here – is one above $ 10,000 (one has to fix this threshold)

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

so large claims represent about 2% of the claims in our dataset. We can run three sets of regressions, with smoothed regressions on the age of the car. The first one models the individual costs of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one models the individual costs of normal claims,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally a third one, on the probability of having a normal-sized claim, given that a claim occurred,

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that, each time, we have something that can be interpreted either as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X},Y\gtrless%20%20s), or as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|Y\gtrless%20%20s) – i.e. no covariate is considered in the latter. On the graph below, we plot

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
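
For the record, a sketch of how those three fitted components can be combined to get the expected cost (ypA, ypB and ypC are the predictions computed above; the plot itself is only illustrative),

> # E(Y|X) = E(Y|X,Y<=s).P(Y<=s|X) + E(Y|X,Y>s).P(Y>s|X)
> ypABC = ypC*ypA + (1-ypC)*ypB
> plot(age,ypABC,type="l",xlab="age of the car",ylab="expected cost")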

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all. I.e.

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but neither is the probability of having an extremely large claim,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s)}_{B%27}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s)}_{B%27}}}}
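
The two simplified versions above just replace the covariate-dependent components by the portfolio-wide quantities ypB2 and ypC2 computed earlier; a sketch (with hypothetical names for the results), adding the curves to the previous plot,

> # large-claim size not explained by covariates: E(Y|Y>s) instead of E(Y|X,Y>s)
> ypAB2C = ypC*ypA + (1-ypC)*ypB2
> # neither the size nor the probability of a large claim depends on covariates
> ypAB2C2 = ypC2*ypA + (1-ypC2)*ypB2
> lines(age,ypAB2C,col="red")
> lines(age,ypAB2C2,col="blue")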

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif

From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Exposure with binomial responses

Last week, we’ve seen how to take the exposure into account when computing nonparametric estimators of several quantities (empirical means and empirical variances). Let us now see what can be done if we want to model a binomial response. The model here is the following:

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

which can be visualized below

http://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

Consider the case where the variable of interest is not the number of claims, but simply the indicator of the occurrence of a claim. Then we wish to model the event https://latex.codecogs.com/gif.latex?\{N=0\} versus https://latex.codecogs.com/gif.latex?\{N%3E0\}, interpreted as non-occurrence and occurrence. The problem is that we can only observe https://latex.codecogs.com/gif.latex?\{Y=0\} versus https://latex.codecogs.com/gif.latex?\{Y%3E0\}, and having an inclusion (no claim over the whole year implies no claim over the observed period) is not enough to derive a model. Actually, with a Poisson process model, we can easily get that

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0)%20=%20\mathbb{P}(N=0)^E

In words, it means that the probability of not having a claim during the first six months of the year is the square root of the probability of not having a claim over the whole year. Which makes sense. Assume that the probability of not having a claim can be explained by some covariates, denoted https://latex.codecogs.com/gif.latex?\boldsymbol{X}, through some link function (using the GLM terminology),

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=0|\boldsymbol{X})=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, since we do observe https://latex.codecogs.com/gif.latex?Y – and not https://latex.codecogs.com/gif.latex?N – we have

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})^E
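
As a quick numeric sanity check of this power relation (with a made-up annual non-claim probability of 0.93 and a six-month exposure),

> .93^.5      # P(Y=0) when P(N=0)=0.93 and E=1/2
[1] 0.9643651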

The dataset we will use is always the same

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> T=table(sinistres$nocontrat)              # number of claims per policy (policies with at least one claim)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]           # policies without any claim
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> sinistres = merge(contrat,nombre)         # one line per policy, with the number of claims
> sinistres$nonsin = (sinistres$nbre==0)    # indicator of no claim over the observed period

The first model we can consider is based on the standard logistic approach, i.e.

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\left(\frac{\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}\right)^E

That’s nice, but difficult to handle with standard functions. Nevertheless, it is always possible to compute numerically the maximum likelihood estimator of https://latex.codecogs.com/gif.latex?\boldsymbol{\beta} given https://latex.codecogs.com/gif.latex?(Y_i,\boldsymbol{X}_i,E_i).

> Y=sinistres$nonsin
> X=cbind(1,sinistres$ageconducteur)
> E=sinistres$exposition
> logL = function(beta){
+ 	pi=(exp(X%*%beta)/(1+exp(X%*%beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))
+ }
> optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")
$par
[1] 2.14420560 0.01040707
$value
[1] 7604.073
$counts
function gradient 
      42       10 
$convergence
[1] 0
$message
NULL
> parametres=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")$par

Now, let us look at alternatives, based on standard regression models. For instance, a binomial model with a log link. Because the exposure appears as a power of the annual probability, everything would be fine if https://latex.codecogs.com/gif.latex?h were the exponential function (or https://latex.codecogs.com/gif.latex?h^{-1} were the log link function), since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\exp(E+\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, if we try to code it, things quickly become problematic,

> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"))
Error: no valid set of coefficients has been found: please supply starting values

I tried (almost) everything I could, but I could not get rid of that error message,

> startglm=c(0,-.001)
> names(startglm)=c("(Intercept)","ageconducteur")
> etaglm=rep(-.01,nrow(sinistres))
> etaglm[sinistres$nonsin==0]=-10
> muglm=exp(etaglm)
> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"),
+ control = glm.control(epsilon=1e-5,trace=TRUE,maxit=50),
+ start=startglm,
+ etastart=etaglm,mustart=muglm)
Deviance = NaN Iterations - 1 
Error: no valid set of coefficients has been found: please supply starting values

So I decided to give up. Almost. Actually, the problem comes from the fact that https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0) is close to 1. I guess everything would be nicer if we could work with probabilities close to 0. Which is possible, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)=1-\mathbb{P}(Y=0)%20=%201-[1-\mathbb{P}(N%3E0)]^E

where https://latex.codecogs.com/gif.latex?\mathbb{P}(N%3E0) is close to 0. So we can use Taylor’s expansion,

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)\sim%201-[1-E\cdot%20\mathbb{P}(N%3E0)]=E\cdot%20\mathbb{P}(N%3E0)

Here, the exposure no longer appears as a power of the probability, but appears multiplicatively. Of course, there are higher order terms. But let us forget them (so far). If – one more time – we consider a log link function, then we can incorporate the exposure, or to be more specific, the logarithm of the exposure.

> regopp=glm((1-nonsin)~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

which now works perfectly.

Now, to see a final model, perhaps we should get back to our Poisson regression model, since it gives us the whole distribution https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=\cdot): in particular, the probability of having no claim is the exponential of minus the fitted expected number of claims, which is what is used in the comparison below.

> regpois=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))

We can now compare those three models. Perhaps we should also include the predictions without any explanatory variable. For the second model (which actually does run without any explanatory variable), we run

>  regreff=glm((1-nonsin)~1+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

so that the prediction is here

> exp(coefficients(regreff))
(Intercept) 
 0.06776376

This value is comparable with the logistic regression,

> logL2 = function(beta){
+ 	pi=(exp(beta)/(1+exp(beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))}
> param=optim(fn=logL2,par=.01,method="BFGS")$par
> 1-exp(param)/(1+exp(param))
[1] 0.06747777

But it is quite different from the Poisson model,

> exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
(Intercept) 
 0.07279295
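
Note that the Poisson intercept above estimates the expected annual number of claims, not the probability of having at least one claim; to make it comparable with the two previous values, one can transform it (a quick computation, not in the original output),

> lambda0 = exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
> 1-exp(-lambda0)     # roughly 0.070, still somewhat larger than 0.0678 and 0.0675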

Let us produce a graph, to compare those models,

> age=18:100
> yml1=exp(parametres[1]+parametres[2]*age)/(1+exp(parametres[1]+parametres[2]*age))
> plot(age,1-yml1,type="l",col="purple")
> yp=predict(regpois,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yp1=1-exp(-yp)
> ydl=predict(regopp,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,ydl,type="l",col="red")
> lines(age,yp1,type="l",col="blue")
> lines(age,1-yml1,type="l",col="purple")
> abline(h=exp(coefficients(regreff)),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.55.591.png

Observe here that the three models are quite different. Actually, with two of those models (the Poisson one and the log-binomial one), it is possible to run more complex regressions, e.g. with splines, to visualize the impact of the age on the probability of having – or not – a car accident. If we compare the Poisson regression (still in red) and the log-binomial model, with Taylor’s expansion, we get
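
Here is a sketch of what those two spline regressions might look like (the exact specification and colors behind the graph below are not given, so the choices here are assumptions; if the log-link binomial fit complains about starting values, they may have to be supplied by hand),

> library(splines)
> regpois.bs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))
> regopp.bs=glm((1-nonsin)~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))
> yp.bs=1-exp(-predict(regpois.bs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response"))
> yd.bs=predict(regopp.bs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,yp.bs,type="l",col="red")     # Poisson
> lines(age,yd.bs,col="blue")            # log-binomial, with Taylor's expansion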

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.39.08.png

The next step is to see how to incorporate the exposure in a tree. But that’s another story…