Category Archives: Ratemaking

Non-Life Insurance Actuarial Science #4

Next Monday, the non-life insurance actuarial science course continues. We have finished the part on classification (logistic model, trees, random forests, bagging, etc.), and we will move on to the section on claim-frequency modelling and Poisson regression. I will add a few slides on GLMs, which will be useful when we discuss overdispersion.

 

Pricing Game, here we go again

For the second year in a row, we are organizing a pricing game as part of the Data Science pour l’Actuariat programme. The instructions are online. For information, three datasets are available online

  • train_contrats: 87,228 observations, 22 variables. Dataset containing the information on the insurance policies
  • train_sinistres: 4,568 observations, 9 variables. Dataset containing the information on the observed claims
  • test_contrats: 32,772 observations, 21 variables. Dataset containing the information on the insurance policies for which a premium must be proposed

The first stage of the game will last two months (from July 15 to September 15). The goal is to propose 32,772 insurance premiums; a premium must be proposed for every single contract. One additional piece of information: for all players, the sum of the premiums will be scaled to 2.5 million euros.

A new feature this year: there will then be a second stage. Each participant will receive the premiums of two of their competitors, and will have the opportunity to update their own premiums and send the updated premiums back by September 20.

Comments are open for any further information! I will try to post additional details, in particular on the description of the variables, during the holidays. The analysis of the game will be presented at the 100% Data Science day of the Institut des Actuaires (which will take place on November 8, 2016). Good luck in the meantime!

Pricing Game, the results

Thursday, I will be in Paris to discuss the results we got from the pricing game. I will present the 12 or 13 models that were sent to me, and discuss what happened when I created a market in which those models were competing. One or two models were clearly underestimating the losses, so with the premiums as they were sent, each time, one company got an 80% market share and a loss ratio above 250%. So I decided to normalize all the premiums, so that the average premium was the same for all the companies. Slides are now available.

Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly come back to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\left([X-c]^2\right)\right\}=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\Vert X-c\Vert_{L_2}\right\}
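
As a quick numerical sanity check (a minimal sketch on a simulated sample; the names x and sse are just for illustration), the empirical counterpart of that argmin is simply the sample mean,

set.seed(1)
x=rexp(1000)                              # any sample will do
sse=function(c) mean((x-c)^2)             # empirical quadratic loss
optimize(sse,interval=range(x))$minimum   # numerical argmin...
mean(x)                                   # ...essentially the sample mean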

it is natural to use least square ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the weight of a person, as well as declared weight.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset, we have to swap two figures

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y=weight*2.204622

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X=Davis$height / 30.48

(in feet, since heights are recorded in centimeters). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.

The goal will be to use (possible) explanatory variables to improve our prediction. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained, for instance as in the sketch below,
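
A minimal sketch (using the variables X and Y defined above, with a quadratic term added purely for illustration) could be

reg_lin=lm(Y~X)                     # standard linear model
reg_poly=lm(Y~poly(X,2))            # a simple nonlinear (polynomial) alternative
u=seq(min(X),max(X),length=101)
plot(X,Y,xlab="height (feet)",ylab="weight (lbs)")
lines(u,predict(reg_lin,newdata=data.frame(X=u)),col="red")
lines(u,predict(reg_poly,newdata=data.frame(X=u)),col="blue",lty=2)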

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance will increase with the expected value.
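
As an illustration, a minimal sketch using the classic cars dataset (where dist can be treated as a count) could be

poisson.reg=glm(dist~speed,data=cars,family=poisson(link="log"))
m=predict(poisson.reg,newdata=data.frame(speed=4:25),type="response")
plot(cars$speed,cars$dist)
lines(4:25,m,col="blue")                    # fitted mean
lines(4:25,m+2*sqrt(m),col="blue",lty=2)    # crude band: the variance equals the mean,
lines(4:25,m-2*sqrt(m),col="blue",lty=2)    # so the band widens with the prediction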

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kind of models can be used for classification. ROC curves will be presented, and explained.
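
For instance, a minimal sketch on the Davis data (predicting the sex from the height and the weight, and using the ROCR package, which is an assumption on my side, for the ROC curve) could be

logit=glm(sex~height+weight,data=Davis,family=binomial)
S=predict(logit,type="response")        # fitted probabilities
library(ROCR)
pred=prediction(S,Davis$sex)
plot(performance(pred,"tpr","fpr"))     # ROC curve
abline(a=0,b=1,lty=2)                   # diagonal, i.e. a purely random classifier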

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques
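
A minimal sketch, still on the Davis data and using the rpart implementation of CART, could be

library(rpart)
tree=rpart(sex~height+weight,data=Davis)
plot(tree)
text(tree)
table(predict(tree,type="class"),Davis$sex)   # in-sample confusion table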

We will also discuss random forests, bagging and boosting techniques.
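
For instance, a minimal sketch with the randomForest package (default tuning parameters) could be

library(randomForest)
set.seed(1)
rf=randomForest(sex~height+weight,data=Davis,ntree=500)
rf                # out-of-bag error estimate
importance(rf)    # variable importance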

A pdf version of the slides can be downloaded.

Smoothing mortality rates

This morning, I was working with Julie, a student of mine from Rennes, on mortality tables. Actually, we work on genealogical datasets from a small region in Québec, and we can observe a lot of volatility. If I borrow one of her graphs, we get something like

Since we have some missing data, we wanted to use some Generalized Nonlinear Models. So let us see how to get a smooth estimator of the mortality surface.  We will write some code that we can use on our data later on (the dataset we have has been obtained after signing a lot of official documents, and I guess I cannot upload it here, even partially).

DEATH <- read.table(
"http://freakonometrics.free.fr/Deces-France.txt",
header=TRUE)
EXPO  <- read.table(
"http://freakonometrics.free.fr/Exposures-France.txt",
header=TRUE,skip=2)
library(gnm)
D=DEATH$Male
E=EXPO$Male
A=as.numeric(as.character(DEATH$Age))
Y=DEATH$Year
I=(A<100)
base=data.frame(D=D,E=E,Y=Y,A=A)
subbase=base[I,]
subbase=subbase[!is.na(subbase$A),]

The first idea can be to use a Poisson model, where the mortality rate is a smooth function of the age and the year, something like

D_{x,t}\sim\mathcal{P}\left(E_{x,t}\cdot e^{s(x,t)}\right)

for some smooth bivariate function s, that can be estimated using

library(mgcv)
regbsp=gam(D~s(A,Y)+offset(log(E)),data=subbase,family=quasipoisson)  # bivariate (thin plate) smooth of age and year
predmodel=function(a,y) predict(regbsp,newdata=data.frame(A=a,Y=y,E=1))
vX=trunc(seq(0,99,length=41))
vY=trunc(seq(1900,2005,length=41))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is here

It is also possible to extract the average value over the years, which is the interpretation of the \alpha_x coefficient in the Lee-Carter model,

predAx=function(a) mean(predict(regbsp,newdata=data.frame(A=a,
Y=seq(min(subbase$Y),max(subbase$Y)),E=1)))
plot(seq(0,99),Vectorize(predAx)(seq(0,99)),col="red",lwd=3,type="l")

We have the following smoothed mortality rate

Recall that the Lee-Carter model is

\log\mu_{x,t}=\alpha_x+\beta_x\kappa_t

where parameter estimates can be obtained using

regnp=gnm(D~factor(A)+Mult(factor(A),factor(Y))+offset(log(E)),
data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regnp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The (crude) mortality surface is

with the following \alpha_x coefficients.

plot(seq(1,99),coefficients(regnp)[2:100],col="red",lwd=3,type="l")

Here we have a lot of coefficients, and unfortunately, on a smaller dataset, we have much more variability. Can we smooth our Lee-Carter model ? To get something which looks like

Actually, we can, and the code is rather simple

library(splines)
knotsA=c(20,40,60,80)
knotsY=c(1920,1945,1980,2000)
regsp=gnm(D~bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degree=3)+
Mult(bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degree=3),
bs(Y,knots=knotsY,Boundary.knots=range(subbase$Y),degree=3))+
offset(log(E)),data=subbase,family=quasipoisson)
BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degree=3)
BpY=bs(seq(min(subbase$Y),max(subbase$Y)),knots=knotsY,Boundary.knots=range(subbase$Y),degree=3)
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is now

and again, it is possible to extract the average mortality rate, as a function of the age, over the years,

BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degree=3)
Ax=BpA%*%coefficients(regsp)[2:8]
plot(seq(0,99),Ax,col="red",lwd=3,type="l")

We can then play with the smoothing parameters of the spline functions, and see the impact on the mortality surface

knotsA=seq(5,95,by=5)
knotsY=seq(1910,2000,by=10)
regsp=gnm(D~bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degree=3)+
Mult(bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degree=3),
bs(Y,knots=knotsY,Boundary.knots=range(subbase$Y),degree=3))
+offset(log(E)),data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
ylab="Years (1900-2005)",zlab="Mortality rate (log)")

We now have to use those functions on our small data sample! That should be fun….

Pricing Reinsurance Contracts

In order to illustrate the next section of the non-life insurance course, consider the following example [1], inspired by http://sciencepolicy.colorado.edu/…. This is the so-called “Normalized Hurricane Damages in the United States” dataset, for the period 1900–2005, from Pielke et al. (2008). The dataset is available in xls format, so we have to spend some time importing it,

> library(gdata)
> db=read.xls(
+ "http://sciencepolicy.colorado.edu/publications/special/public_data_may_2007.xls",
+ sheet=1)
trying URL 'http://sciencepolicy.colorado.edu/publications/special/public_data_may_2007.xls'

Content type 'application/vnd.ms-excel' length 119296 bytes (116 Kb)
opened URL
==================================================
downloaded 116 Kb

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "fr_CA:fr",
	LC_ALL = (unset),
	LANG = "fr_CA.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

The problem with excel spreadsheets is that some columns might have a pre-specified format (here, losses are formatted as 000,000,000 for instance)

> tail(db)
    Year Hurricane.Description State Category Base.Economic.Damage
202 2005                 Cindy    LA        1          320,000,000
203 2005                Dennis    FL        3        2,230,000,000
204 2005               Katrina LA,MS        3       81,000,000,000
205 2005               Ophelia    NC        1        1,600,000,000
206 2005                  Rita    TX        3       10,000,000,000
207 2005                 Wilma    FL        3       20,600,000,000
    Normalized.PL05 Normalized.CL05  X X.1
202     320,000,000     320,000,000 NA  NA
203   2,230,000,000   2,230,000,000 NA  NA
204  81,000,000,000  81,000,000,000 NA  NA
205   1,600,000,000   1,600,000,000 NA  NA
206  10,000,000,000  10,000,000,000 NA  NA
207  20,600,000,000  20,600,000,000 NA  NA

To get data in a format we can play with, consider the following function,

> stupidcomma = function(x){
+ # strip the thousands separators (commas), then coerce to numeric
+ x=as.character(x)
+ for(i in 1:10){x=sub(",","",as.character(x))}
+ return(as.numeric(x))}

and let’s convert those values into numbers,

> base=db[,1:4]
> base$Base.Economic.Damage=Vectorize(stupidcomma)(db$Base.Economic.Damage)
> base$Normalized.PL05=Vectorize(stupidcomma)(db$Normalized.PL05)
> base$Normalized.CL05=Vectorize(stupidcomma)(db$Normalized.CL05)

Here is the dataset we will use, from now on,

> tail(base)
    Year Hurricane.Description State Category Base.Economic.Damage
202 2005                 Cindy    LA        1             3.20e+08
203 2005                Dennis    FL        3             2.23e+09
204 2005               Katrina LA,MS        3             8.10e+10
205 2005               Ophelia    NC        1             1.60e+09
206 2005                  Rita    TX        3             1.00e+10
207 2005                 Wilma    FL        3             2.06e+10
    Normalized.PL05 Normalized.CL05
202        3.20e+08        3.20e+08
203        2.23e+09        2.23e+09
204        8.10e+10        8.10e+10
205        1.60e+09        1.60e+09
206        1.00e+10        1.00e+10
207        2.06e+10        2.06e+10

We can visualize the normalized costs of hurricanes, from 1900 till 2005, with the 207 hurricanes (here the x-axis is not time, it is simply the index of the loss)

> plot(base$Normalized.PL05/1e9,type="h",ylim=c(0,155))

As usual, there are two components when computing the pure premium of an insurance contract: the number of claims (here, hurricanes) and the individual loss of each claim. We have seen individual losses above, so let us now focus on the annual frequency.

> TB <- table(base$Year)
> years <- as.numeric(names(TB))
> counts <- as.numeric(TB)
> years0=(1900:2005)[which(!(1900:2005)%in%years)]
> db <- data.frame(years=c(years,years0),
+ counts=c(counts,rep(0,length(years0))))
> db[88:93,]
   years counts
88  2003      3
89  2004      6
90  2005      6
91  1902      0
92  1905      0
93  1907      0

On average, we experience about 2 (major) hurricanes per year,

> mean(db$counts)
[1] 1.95283

In predictive modeling (here, we wish to price a reinsurance contract for, say, 2014), we probably need to take into account a possible trend in the hurricane occurrence frequency. We can consider either a linear trend,

> reg0 <- glm(counts~years,data=db,family=poisson(link="identity"),
+ start=lm(counts~years,data=db)$coefficients)

or an exponential one,

> reg1 <- glm(counts~years,data=db,family=poisson(link="log"))

We can plot those three predictions, and get a prediction for the number of (major) hurricanes in 2014,

> plot(years,counts,type='h',ylim=c(0,6),xlim=c(1900,2020))
> cpred1=predict(reg1,newdata=data.frame(years=1890:2030),type="response")
> lines(1890:2030,cpred1,col="blue")
> cpred0=predict(reg0,newdata=data.frame(years=1890:2030),type="response")
> lines(1890:2030,cpred0,col="red")
> abline(h=mean(db$counts),col="black")
> (predictions=cbind(constant=mean(db$counts),linear=
+ cpred0[126],exponential=cpred1[126]))
    constant   linear exponential
126  1.95283 3.573999    4.379822
> points(rep((1890:2030)[126],3),predictions,col=c("black","red","blue"),pch=19)

Observe that changing the model will change the pure premium: with a flat prediction, we expect less than 2 (major) hurricanes, but with the exponential trend, we expect more than 4…

This is for the expected frequency. Now, we should find a suitable model to compute the pure premium of a reinsurance treaty, with a (high) deductible and a limited (but large) cover. As we will see in class next week, the appropriate model is a Pareto distribution (see Hagstrœm (1925), Huyghues-Beaufond (1991), or a survey, in French, published a few years ago).

We can use Hill’s plot to estimate the tail index. Recall that the Hill estimator, based on the k largest observations, is

\widehat{\xi}_k=\frac{1}{k}\sum_{i=1}^{k}\log X_{(n-i+1)}-\log X_{(n-k)}

plotted here against k,

> library(evir)
> hill(base$Normalized.PL05)

Clearly, costs of major hurricanes are heavy tailed.

Now, consider an insurance company, in the U.S., with a 5% market share (just to illustrate). We will thus consider the scaled losses \tilde Y_i=Y_i/20. The losses are given below. Consider a reinsurance treaty, with a deductible of 2 (billion) and a limited cover of 4 (billion),

For our Pareto model, consider only losses above 500 million,

> threshold=.5
> (gpd.PL <- gpd(base$Normalized.PL05/1e9/20,threshold)$par.ests)
       xi      beta 
0.4424669 0.6705315

Keep in mind that 1 hurricane out of 8 reaches that level

> mean(base$Normalized.CL05/1e9/20>.5)
[1] 0.1256039

Given that the loss exceeds 500 million, we can now compute the expected payout of the reinsurance contract,

\mathbb{E}\left[\min\{(\tilde Y-d)_+,c\}\mid \tilde Y>u\right]=\int_d^{d+c}(y-d)\,dG(y)+\left(1-G(d+c)\right)c

where u is the 500 million threshold, d=2 the deductible, c=4 the cover, and G the Generalized Pareto distribution fitted above. To compute it, we can use

> E <- function(yinf,ysup,xi,beta){
+   as.numeric(integrate(function(x) (x-yinf)*dgpd(x,xi,mu=threshold,beta),
+   lower=yinf,upper=ysup)$value+
+   (1-pgpd(ysup,xi,mu=threshold,beta))*(ysup-yinf))
+ }

[Nov 5th] There is a typo in the previous function: the threshold should be passed as a parameter of the function if you want to play with it and see the impact of the threshold (see a more recent post on the same topic, but with a different dataset)… but here, we do not change the threshold, so it is not a big deal.
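
For completeness, a corrected sketch with the threshold passed explicitly (the name E2 is just for illustration, the logic is otherwise identical) could be

> E2 <- function(yinf,ysup,xi,beta,u){
+   # expected payout of the layer [yinf,ysup], given a loss above the threshold u
+   as.numeric(integrate(function(x) (x-yinf)*dgpd(x,xi,mu=u,beta),
+   lower=yinf,upper=ysup)$value+
+   (1-pgpd(ysup,xi,mu=u,beta))*(ysup-yinf))
+ }
> E2(2,6,gpd.PL[1],gpd.PL[2],threshold)   # should match the value obtained below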

Now, it is probably time to bring all the pieces together. We might expect a bit less than 2 (major) hurricanes per year,

> predictions[1]
[1] 1.95283

and each hurricane has a 12.5% chance of costing our insurance company more than 500 million,

> mean(base$Normalized.PL05/1e9/20>.5)
[1] 0.1256039

and given that a hurricane generates a loss above 500 million, the expected payout by the reinsurance company is (in millions)

> E(2,6,gpd.PL[1],gpd.PL[2])*1e3
[1] 330.9865

So the pure premium of the reinsurance contract is simply

> predictions[1]*mean(base$Normalized.PL05/1e9/20>.5)*
+ E(2,6,gpd.PL[1],gpd.PL[2])*1e3
[1] 81.18538

for a cover of 4 billion, in excess of 2.

[1] This example will appear in the Reinsurance and Extremal Events chapter of the forthcoming Computational Actuarial Science with R, by Eric Gilleland and Mathieu Ribatet.

GLM, non-linearity and heteroscedasticity

Last week, in the non-life insurance course, we saw the theory of Generalized Linear Models, emphasizing the two important components

  • the link function (which is actually the key component in predictive modeling)
  • the distribution, or the variance function

Just to illustrate, consider my favorite dataset

lin.mod = lm(dist~speed,data=cars)

A linear model means here Y_i=\beta_0+\beta_1X_i+\varepsilon_i

where the residuals are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like that

The idea here (in GLMs) is to assume Y\vert X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2)

which will produce the same model as the one described previously, based on some error term. That model can be visualized below,

attach(cars)
n=2
X= cars$speed 
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype ="detailed", box = FALSE)
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))  # Gaussian GLM with identity link, i.e. the linear model
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
sdgig=sqrt(summary(reggig)$dispersion)
x=seq(min(X),max(X),length=501)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
stp=251
x=rep(vX[j],stp)
y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"), sdgig)),max(Y)+15,length=stp)
z0=rep(0,stp)
z=dnorm(y, mgig[j], sdgig)
C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
polygon(C,border=NA,col="light blue",density=40)
C=trans3d(x,y,z0,mat)
lines(C,lty=2)
C=trans3d(x,y,z,mat)
lines(C,col="blue")}

We do have two parts here: the linear increase of the average, \mathbb{E}(Y\vert X=x)=\beta_0+\beta_1x and the constant variance of the normal distribution \text{Var}(Y\vert X=x)=\sigma^2.

On the other hand, if we assume a Poisson regression,

poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))

we have something like

This time, two things have changed simultaneously: our model is no longer linear, it is an exponential one \mathbb{E}(Y\vert X=x)=e^{\beta_0+\beta_1x}, and the variance is also increasing with the explanatory variable \text{Var}(Y\vert X=x)=e^{\beta_0+\beta_1x}, since with a Poisson regression,
Y\vert X=x\sim\mathcal{P}(e^{\beta_0+\beta_1x})

If we adapt the previous code, we get

The problem is that we changed two things at once when we moved from the linear model to the Poisson regression. So let us look at what happens when we change the two components independently. First, we can change the link function, keeping a Gaussian model but this time with a multiplicative structure (a logarithm link function)

gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))

which is still, here, a homoscedastic model, but this time a nonlinear one. Or we can change the link function in the Poisson regression, to get a linear, but heteroscedastic, model

poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))
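
To compare the four combinations on a single graph (a minimal sketch, plotting only the fitted means on the original scale, which is enough to see the linear versus exponential shapes),

u=seq(4,25,length=101)
plot(cars$speed,cars$dist,xlab="speed",ylab="dist")
lines(u,predict(reggig,newdata=data.frame(X=u),type="response"),col="black")            # Gaussian, identity link
lines(u,predict(gaussian.reg,newdata=data.frame(speed=u),type="response"),col="blue")   # Gaussian, log link
lines(u,predict(poisson.reg,newdata=data.frame(speed=u),type="response"),col="red")     # Poisson, log link
lines(u,predict(poisson.lin,newdata=data.frame(speed=u),type="response"),col="green")   # Poisson, identity link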

So this is basically what GLMs are about….

Modelling individual claim costs

This week, even though the UQAM network is down, the course will go on, and we will finish the section on modelling overdispersion for claim frequency. We should then start modelling individual claim costs. In particular, we will spend some time on two points,

  • the distinction between the lognormal and the gamma distributions
  • the capping of large claims (écrêtement des gros sinistres)

The slides are online. And the claim-cost dataset is the one mentioned in the second lecture.

Overdispersion and count data

This week, in the non-life insurance course, we will discuss overdispersion, which will close the part of the course on claim-frequency modelling. The slides are online. But before talking about overdispersion, we will finish the presentation of GLMs. Here is a link to chapter 15 of John Fox's book Applied Regression Analysis and Generalized Linear Models, as well as to James K. Lindsey's book Applying Generalized Linear Models. I would also like to point to Germán Rodríguez's lecture notes, with notes on Poisson regression (and a short complement on the notion of overdispersion).

The instructions for the second assignment will be sent by email.

Regression on variables, or on categories?

I admit it, the title sounds weird. The problem I want to address this evening is related to the use of the stepwise procedure on a regression model, and to the use of categorical variables (and possible misinterpretations). Consider the following dataset

> db = read.table("http://freakonometrics.free.fr/db2.txt",header=TRUE,sep=";")

First, let us change the reference level of our categorical variable (just to get an easier interpretation later on)

> db$X3=relevel(as.factor(db$X3),ref="E")

If we run a logistic regression on the three variables (two continuous, one categorical), we get

> reg=glm(Y~X1+X2+X3,family=binomial,data=db)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Now, if we use a stepwise procedure, to select variables in the model, we get

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3

       Df Deviance    AIC
- X3    4   587.81 593.81
<none>      582.29 596.29
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=593.81
Y ~ X1 + X2

       Df Deviance    AIC
<none>      587.81 593.81
- X2    1   606.90 610.90
- X1    1   622.44 626.44

So clearly, we should remove the categorical variable if our starting point was the regression on the three variables.

Now, what if we consider the same model, but written slightly differently, with the five category indicators,

> X3complete = model.matrix(~0+X3,data=db)
> db2 = data.frame(db,X3complete)
> head(db2)
  Y       X1       X2 X3 X3A X3B X3C X3D X3E
1 1 3.297569 16.25411  B   0   1   0   0   0
2 1 6.418031 18.45130  D   0   0   0   1   0
3 1 5.279068 16.61806  B   0   1   0   0   0
4 1 5.539834 19.72158  C   0   0   1   0   0
5 1 4.123464 18.38634  C   0   0   1   0   0
6 1 7.778443 19.58338  C   0   0   1   0   0

From a technical point of view, it is exactly the same as before, if we look at the regression,

> reg = glm(Y~X1+X2+X3A+X3B+X3C+X3D+X3E,family=binomial,data=db2)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E, family = binomial, 
    data = db2)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
X3E               NA         NA      NA       NA    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Both regressions are equivalent. Now, what about a stepwise selection on this new model?

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E

Step:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D

       Df Deviance    AIC
- X3D   1   582.30 594.30
- X3A   1   582.37 594.37
- X3B   1   582.40 594.40
<none>      582.29 596.29
- X3C   1   585.21 597.21
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=594.3
Y ~ X1 + X2 + X3A + X3B + X3C

       Df Deviance    AIC
- X3A   1   582.38 592.38
- X3B   1   582.41 592.41
<none>      582.30 594.30
- X3C   1   586.30 596.30
- X2    1   600.58 610.58
- X1    1   617.27 627.27

Step:  AIC=592.38
Y ~ X1 + X2 + X3B + X3C

       Df Deviance    AIC
- X3B   1   582.44 590.44
<none>      582.38 592.38
- X3C   1   587.20 595.20
- X2    1   600.59 608.59
- X1    1   617.64 625.64

Step:  AIC=590.44
Y ~ X1 + X2 + X3C

       Df Deviance    AIC
<none>      582.44 590.44
- X3C   1   587.81 593.81
- X2    1   600.73 606.73
- X1    1   617.66 623.66

What do we get now? This time, the stepwise procedure recommends that we keep one category (namely C). So my point is simple: when running a stepwise procedure with factors, either we keep the factor as it is, or we drop it. If the design should have been changed by pooling some categories together, and we forgot to do it, then the procedure will suggest removing the whole variable, because keeping 4 categories that mean the same thing costs too much in terms of the Akaike criterion. And this is exactly what happens here,

> library(car)
> reg = glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)
> linearHypothesis(reg,c("X3A=X3B","X3A=X3D","X3A=0"))
Linear hypothesis test

Hypothesis:
X3A - X3B = 0
X3A - X3D = 0
X3A = 0

Model 1: restricted model
Model 2: Y ~ X1 + X2 + X3

  Res.Df Df  Chisq Pr(>Chisq)
1    996                     
2    993  3 0.1446      0.986

So here, we should pool together categories A, B, D and E (E being the reference). As mentioned in a previous post, categories that should be pooled together have to be pooled as soon as possible. If not, the stepwise procedure might lead to some misinterpretations, as shown in the sketch below.
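
A minimal sketch of that pooling (the name X3pooled is just for illustration) could be

> db$X3pooled = relevel(as.factor(ifelse(db$X3=="C","C","ABDE")),ref="ABDE")
> reg2 = glm(Y~X1+X2+X3pooled,family=binomial,data=db)
> summary(reg2)
> step(reg2)   # the pooled factor is equivalent to the X3C dummy kept above, so it should now be retained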

Poisson Regression

On Wednesday, we will finish classification trees, and we will start modelling claim frequency. The slides are online.

As announced in the first lecture, I suggest starting to read the Practitioner’s Guide to Generalized Linear Models. That document corresponds to the minimum expected in this course.