Tag Archives: GLM

Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property, i.e. the average predicted value is the empirical mean of our sample.

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98

One can prove that it is also the prediction for the average individual in our sample

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed))) 
42.98

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But with other models, it is no longer the case. Consider for instance a logistic regression: the prediction for the “average individual” is no longer the empirical mean. And to ask for something even more complicated, consider the case where we only have categorical explanatory variables. In that context, it is more difficult to get a prediction for the “average individual”. Unless we consider some fuzzy interpretation of the regression.
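Just to illustrate that point on simulated data (a minimal sketch, not the dataset used in the post): with a logistic regression, the prediction for the individual with average characteristics differs from the empirical mean of the response, because of the nonlinear link,

set.seed(1)
x <- rnorm(200)
y <- rbinom(200, size = 1, prob = plogis(-1 + 2 * x))
reg <- glm(y ~ x, family = binomial)
mean(y)                                                             # empirical mean of the response
predict(reg, newdata = data.frame(x = mean(x)), type = "response")  # prediction for the "average individual"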

Continue reading Classification with Categorical Variables (the fuzzy side)

Regression Models, It’s Not Only About Interpretation

Yesterday, I uploaded a post where I tried to show that “standard” regression models were not performing badly. At least if you include splines (multivariate splines) to take into account joint effects and nonlinearities. So far, I have not discussed the possibly high number of features (but with bootstrap procedures, it is possible to assess something related to variable importance, which people from machine learning like).

But my post was not complete: I was simply plotting the prediction obtained by some model. And it “looked like” the regression was nice, but so were the random forest, the k-nearest neighbour and the boosting algorithm. What if we compare those models on new data?
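A generic way of doing that (a minimal sketch on the cars dataset, not the comparison carried out in the post) is to keep a hold-out sample and compare out-of-sample errors,

set.seed(1)
idx   <- sample(nrow(cars), size = floor(.8 * nrow(cars)))
train <- cars[idx, ]
test  <- cars[-idx, ]
fit   <- lm(dist ~ speed, data = train)
mean((test$dist - predict(fit, newdata = test))^2)   # out-of-sample mean squared error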

Continue reading Regression Models, It’s Not Only About Interpretation

On Some Alternatives to Regression Models

When you start discussing with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always”. I have no problem with the fact that I use old econometric models, but I had the feeling that things aren’t that easy. I can understand that we might have problems when we do have a lot of features (I am still working on that, I’ll get back to this point soon), but I have the feeling that I can still capture interactions and non-linearities with standard econometric models as well as with any machine learning algorithm.

Just to illustrate, consider the following ‘model’

\mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]=m(\boldsymbol{x})

where m(\cdot) is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30, 
+ col="green", ticktype="detailed",shade=TRUE)

Continue reading On Some Alternatives to Regression Models

Visualising a Classification in High Dimension

So far, when discussing classification, we’ve been playing with my toy dataset (actually, I should not claim it’s mine, it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts to be more complicated to visualise. For instance, consider

MYOCARDE=read.table(
"http://freakonometrics.free.fr/saporta.csv",
head=TRUE,sep=";")

where we have observations on patients admitted to the emergency room for a myocardial infarction, and we want to understand who survived, in order to build a predictive model. But before running some classifier, let us visualise our data. Since we have seven explanatory variables plus our class (survival or death), we can run a PCA.

library(FactoMineR) # PCA (on the continuous variables)
X=MYOCARDE[,1:7]
acp=PCA(X)

To add the death/survival variable, treat it as a numerical 0/1 variable (at least to get a direction)

MYOCARDE2=MYOCARDE
MYOCARDE2$PRONO=(MYOCARDE2$PRONO=="SURVIE")*1
acp=PCA(MYOCARDE2,quanti.sup=8,graph=TRUE)

The nice thing is that we can see here which variables are collinear with that one. It is also possible to visualise individuals, and classes, too

acp=PCA(MYOCARDE,quali.sup=8,graph=TRUE)
plot(acp, habillage = 8,col.hab=c("red","blue"))

Continue reading Visualising a Classification in High Dimension

Supervised Classification, Logistic and Multinomial

In our Data Science course, we will start discussing classification techniques (in the context of supervised models). Consider the following case, with 10 points, and two classes (red and blue)

> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])

To get a prediction, i.e. a partition of the space in two parts, consider some logistic regression

> reg=glm(z~x+y,data=df,family=binomial)
> summary(reg)
 
Call:
glm(formula = z ~ x + y, family = binomial, data = df)
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.6593  -0.4400   0.2564   0.5830   1.5374  
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -1.706      1.999  -0.854    0.393
x             -5.489      5.360  -1.024    0.306
y              8.568      5.515   1.554    0.120
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 13.4602  on 9  degrees of freedom
Residual deviance:  8.1445  on 7  degrees of freedom
AIC: 14.144
 
Number of Fisher Scoring iterations: 5

Given some point, the predicted class is obtained using

> pred_1 <- function(x,y){
+ predict(reg,newdata=data.frame(x=x,
+ y=y),type="response")>.5
+ }

(here, the predicted class is simply the one that is the most likely). To visualize it use

> x_grid<-seq(0,1,length=101)
> y_grid<-seq(0,1,length=101)
> z_grid <- outer(x_grid,y_grid,pred_1)
> image(x_grid,y_grid,z_grid,col=clr2)
> points(x,y,pch=19,cex=2,col=clr1[z+1])

Since the logistic regression is a (generalized) linear model, the line that separates the two regions is a straight line.
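That line can be added explicitly (a small sketch, reusing the reg fitted above): the boundary is the set of points where \beta_0+\beta_1x+\beta_2y=0,

b <- coef(reg)
abline(a = -b[1]/b[3], b = -b[2]/b[3], lwd = 2)   # decision boundary of the logistic model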

Continue reading Supervised Classification, Logistic and Multinomial

Binomial regression model

Most of the time, when we introduce binomial models, such as the logistic or probit models, we discuss only Bernoulli variables, Y\in\{0,1\}. This year (actually also the year before), I discuss extensions to multinomial regressions, where the vector of class probabilities is a function valued in some simplex. The multinomial logistic model was mentioned here. The idea is to consider, for instance with three possible classes A, B and C,

the following model

\mathbb{P}(Y=A\vert\boldsymbol{X}=\boldsymbol{x})=\frac{e^{\boldsymbol{x}'\boldsymbol{\beta}_A}}{1+e^{\boldsymbol{x}'\boldsymbol{\beta}_A}+e^{\boldsymbol{x}'\boldsymbol{\beta}_B}}\quad\text{and}\quad\mathbb{P}(Y=B\vert\boldsymbol{X}=\boldsymbol{x})=\frac{e^{\boldsymbol{x}'\boldsymbol{\beta}_B}}{1+e^{\boldsymbol{x}'\boldsymbol{\beta}_A}+e^{\boldsymbol{x}'\boldsymbol{\beta}_B}}

and

\mathbb{P}(Y=C\vert\boldsymbol{X}=\boldsymbol{x})=\frac{1}{1+e^{\boldsymbol{x}'\boldsymbol{\beta}_A}+e^{\boldsymbol{x}'\boldsymbol{\beta}_B}}
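In R, such a model can be fitted for instance with nnet::multinom. A minimal sketch, on the iris data just for illustration (this is not the example used in the course),

library(nnet)
fit <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris)
head(predict(fit, type = "probs"))   # fitted class probabilities, each row lying on the simplex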

Continue reading Binomial regression model

Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly return to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\left([X-c]^2\right)\right\}=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\Vert X-c\Vert_{L_2}^2\right\}

it is natural to use least squares ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the weight of a person, as well as the declared weight.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset, we have to swap two figures

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y=weight*2.204622

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X=Davis$height / 30.48

(here, heights converted from centimetres into feet). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.

The goal will be to use (possible) explanatory variables to improve our prediction. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained.
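For instance (a sketch, with the Y and X defined above; the spline specification is just one possible nonlinear choice, not necessarily the one used in the talk),

library(splines)
reg_lin <- lm(Y ~ X)              # standard linear model
reg_spl <- lm(Y ~ bs(X, df = 5))  # a nonlinear alternative, based on B-splines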

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance increases with the expected value.

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kind of models can be used for classification. ROC curves will be presented, and explained.

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques

We will also discuss random forests, bagging and boosting techniques

A pdf version of the slides can be downloaded.

SOA Webinar on Predictive Modeling

I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies, in a few days. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since there are animated pictures in the slides that cannot be visualized below, for instance). The Society of Actuaries asked specifically for a PowerPoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file for a better quality… Sorry for the inconvenience. I will soon upload the lines of code to reproduce most of the graphs. All comments and remarks are welcome.

More significant? so what…

Following my non-life insurance class, this morning, I had an interesting question from a student, that I will try to illustrate, and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest, and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide interesting parts here, but it is to get quickly to my student’s point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – with another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement: “I would like to choose the first model, because the explanatory variable is more significant, and therefore this model should have a stronger predictive power”.

That’s a nice idea, isn’t it? Actually, I guess this is why I love teaching, because I would never have been able to come up with such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard deviations? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variables,

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a gamma distribution, using a GLM (where the regression is on a constant only), just for practice, because later on we will use a gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")

Now, we need a covariate, to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that the p-value in the regression will be close to 5% or 10%. So here, I did generate a variable so that the pair (X,Y) has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n); 
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)

To visualize the copula of the variables, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")
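As a quick sanity check (a small sketch, not in the original post), one can also test the Pearson correlation between X and Y directly,

cor(X, Y)
cor.test(X, Y)   # small, negative, but significant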

We should not be far away from independence (actually, there is a negative – and significant – Pearson correlation). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a gamma model, with an identity link function (so the regression function is still linear)

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10

And here are the two predictions,

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0 since we do have, here, a positive variable that we would like to model) while the gamma one is here

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get
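(a sketch of those reduced, intercept-only regressions, with the same two families as above)

reg1_0 <- lm(Y ~ 1)                                      # Gaussian model, no covariate
reg2_0 <- glm(Y ~ 1, family = Gamma(link = "identity"))  # gamma model, no covariate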

Here, I do believe that the gamma (not to say the exponential) model is better, because it is clearly more coherent with the properties of the variable of interest. I trust the confidence interval obtained above with the gamma model more than the one obtained with the Gaussian distribution. Even if the parameter in the regression is “more significant”.

GLM, non-linearity and heteroscedasticity

Last week, in the non-life insurance course, we saw the theory of Generalized Linear Models, emphasizing the two important components

  • the link function (which is actually the key component in predictive modeling)
  • the distribution, or the variance function

Just to illustrate, consider my favorite dataset

lin.mod = lm(dist~speed,data=cars)

A linear model means here Y_i=\beta_0+\beta_1X_i+\varepsilon_i

where the residuals are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like that

The idea here (in GLMs) is to assume Y\vert X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2)

which will produce the same model as the one described previously, based on some error term. That model can be visualized below,

attach(cars)
n=2
X= cars$speed 
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype ="detailed", box = FALSE)
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
sdgig=sqrt(summary(reggig)$dispersion)
x=seq(min(X),max(X),length=501)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
stp=251
x=rep(vX[j],stp)
y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"), sdgig)),max(Y)+15,length=stp)
z0=rep(0,stp)
z=dnorm(y, mgig[j], sdgig)
C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
polygon(C,border=NA,col="light blue",density=40)
C=trans3d(x,y,z0,mat)
lines(C,lty=2)
C=trans3d(x,y,z,mat)
lines(C,col="blue")}

We do have two parts here: the linear increase of the average, \mathbb{E}(Y\vert X=x)=\beta_0+\beta_1x and the constant variance of the normal distribution \text{Var}(Y\vert X=x)=\sigma^2.

On the other hand, if we assume a Poisson regression,

poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))

we have something like

This time, two things have changed simultaneously: our model is no longer linear, it is an exponential one \mathbb{E}(Y\vert X=x)=e^{\beta_0+\beta_1x}, and the variance is also increasing with the explanatory variable \text{Var}(Y\vert X=x)=e^{\beta_0+\beta_1x}, since with a Poisson regression,
Y\vert X=x\sim\mathcal{P}(e^{\beta_0+\beta_1x})

If we adapt the previous code, we get
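The key changes are the following (a sketch, not the full plotting code): the Gaussian fit, quantiles and density are replaced by their Poisson counterparts,

reg_pois <- glm(dist ~ speed, data = cars, family = poisson(link = "log"))
lambda   <- predict(reg_pois, newdata = data.frame(speed = 15), type = "response")
qpois(c(.05, .95), lambda)   # 90% band at speed = 15, replacing the qnorm() calls
dpois(0:120, lambda)         # conditional distribution, replacing dnorm()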

The problem is that we changed two things when we introduced the Poisson regression, compared with the linear model. So let us look at what happens when we change the two components independently. First, we can change the link function, keeping a Gaussian model, but this time with a multiplicative model (with a logarithm link function)

gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))

which is still, here, a homoscedastic model, but this time a nonlinear one. Or we can change the link function in the Poisson regression, to get a linear, but heteroscedastic, model

poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))

So this is basically what GLMs are about….

Overdispersion and count data

This week, in the non-life insurance course, we will cover overdispersion, which will close the part of the course on claim frequency modelling. The slides are online. But before talking about overdispersion, we will finish the presentation of GLMs. I am posting a link to chapter 15 of John Fox's book Applied Regression Analysis and Generalized Linear Models, as well as to James K. Lindsey's book Applying Generalized Linear Models. I would also like to point to Germán Rodríguez's lecture notes, with notes on Poisson regression (and a short complement on the notion of overdispersion).

The instructions for the second assignment will be sent by email.

Earthquake dynamics

I just uploaded on http://hal.archives-ouvertes.fr/hal-00871883 a joint paper entitled Modeling earthquake dynamics.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.

ROC curves and classification

To get back to a question asked after the last course (still on non-life insurance), I will spend some time discussing the construction, and the interpretation, of ROC curves. Consider the dataset we used last week,

> db = read.table("http://freakonometrics.free.fr/db.txt",header=TRUE,sep=";")
> attach(db)

The first step is to get a model. For instance, a logistic regression, where some factors were merged together,

> X3bis=rep(NA,length(X3))
> X3bis[X3%in%c("A","C","D")]="ACD"
> X3bis[X3%in%c("B","E")]="BE"
> db$X3bis=as.factor(X3bis)
> reg=glm(Y~X1+X2+X3bis,family=binomial,data=db)

From this model, we can predict a probability, not a 0/1 variable,

> S=predict(reg,type="response")

Let \widehat{S} denote this variable (actually, we can use the score, or the predicted probability, it will not change the construction of our ROC curve). What if we really want to predict a 0/1 variable, as we usually do in decision theory? The idea is to consider a threshold s, so that

  • if \widehat{S}_i>s, then \widehat{Y}_i will be 1, or “positive” (using a standard terminology)
  • if \widehat{S}_i\leq s, then \widehat{Y}_i will be 0, or “negative”

Then we derive a contingency table, or a confusion matrix

                             observed value Y
                          “positive”   “negative”
predicted value \widehat{Y}
     “positive”               TP            FP
     “negative”               FN            TN

where TP are the so-called true positives, TN the true negatives, FP the false positives (or type I errors) and FN the false negatives (type II errors). We can get that contingency table for any given threshold s

> roc.curve=function(s,print=FALSE){
+ Ps=(S>s)*1
+ FP=sum((Ps==1)*(Y==0))/sum(Y==0)
+ TP=sum((Ps==1)*(Y==1))/sum(Y==1)
+ if(print==TRUE){
+ print(table(Observed=Y,Predicted=Ps))
+ }
+ vect=c(FP,TP)
+ names(vect)=c("FPR","TPR")
+ return(vect)
+ }
> threshold = 0.5
> roc.curve(threshold,print=TRUE)
        Predicted
Observed   0   1
       0   5 231
       1  19 745
      FPR       TPR 
0.9788136 0.9751309

Here, we also compute the false positive rates, and the true positive rates,

  • TPR = TP / P = TP / (TP + FN), also called sensitivity, is the true positive rate: the probability of being predicted positive, given that someone is positive
  • FPR = FP / N = FP / (FP + TN) is the false positive rate: the probability of being predicted positive, given that someone is negative

The ROC curve is then obtained using several values for the threshold. For convenience, define

> ROC.curve=Vectorize(roc.curve)

First, we can plot (\widehat{S}_i,Y_i) (a standard predicted versus observed graph), and visualize true and false positives and negatives, using simple colors

> I=(((S>threshold)&(Y==0))|((S<=threshold)&(Y==1)))
> plot(S,Y,col=c("red","blue")[I+1],pch=19,cex=.7,xlab="",ylab="")
> abline(v=threshold,col="gray")

And for the ROC curve, simply use

> M.ROC=ROC.curve(seq(0,1,by=.01))
> plot(M.ROC[1,],M.ROC[2,],col="grey",lwd=2,type="l")

This is the ROC curve. Now, to see why it can be interesting, we need a second model. Consider for instance a classification tree

> library(tree)
> ctr <- tree(Y~X1+X2+X3bis,data=db)
> plot(ctr)
> text(ctr)

To plot the ROC curve, we just need to use the prediction obtained using this second model,

> S=predict(ctr)

All the code described above can be used. Again, we can plot (\widehat{S}_i,Y_i) (observe that we have 5 possible values for \widehat{S}_i, which makes sense since we do have 5 leaves in our tree). Then, we can plot the ROC curve,
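For instance (a sketch, reusing the roc.curve function defined above; M.ROC.tree is the object used in the comparison below),

M.ROC.tree <- ROC.curve(seq(0, 1, by = .01))   # S now contains the tree's predicted probabilities
plot(M.ROC.tree[1, ], M.ROC.tree[2, ], col = "grey", lwd = 2, type = "l")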

An interesting idea can be to plot the two ROC curves on the same graph, in order to compare the two models

> plot(M.ROC[1,],M.ROC[2,],type="l")
> lines(M.ROC.tree[1,],M.ROC.tree[2,],type="l",col="grey",lwd=2)

The most difficult part is to get a proper interpretation. The tree is not predicting well in the lower part of the curve, which concerns people with a very high predicted probability. If our interest is more in those with a probability lower than 90%, then we have to admit that the tree is doing a good job, since its ROC curve is always higher, compared with the logistic regression.

Exposure as a possible explanatory variable

In insurance pricing, the exposure is usually used as an offset variable to model claims frequency. As explained many times on this blog (e.g. here), and in my notes, if we have two identical drivers, one with an exposure of 6 months and the other with an exposure of one year, it is natural to assume that, on average, the second driver will have twice as many accidents. This is the motivation for using a standard (homogeneous) Poisson process to model claim frequency. One can also see a legal issue here, since, in case of a (partial) reimbursement of a premium, it would be done prorata temporis. The risk is proportional to the exposure. Thus, if Y_i denotes the number of claims of insured i, with characteristics \boldsymbol{X}_{i}=(X_{i,1},\cdots,X_{i,k}) and exposure E_i, with a Poisson regression, we would write

Y_i\sim\mathcal{P}(E_i\cdot\exp(\boldsymbol{X}_i'\boldsymbol{\beta}))

or equivalently

Y_i\sim\mathcal{P}(\exp(\log(E_i)+\boldsymbol{X}_i'\boldsymbol{\beta}))

From this expression, the logarithm of the exposure is an explanatory variable, but there is no coefficient to estimate (the coefficient is taken to be one). Can’t we use the exposure as an explanatory variable? Would we get a unit parameter?

Of course, in the context of ratemaking, it is probably not a relevant question, since actuaries are required to predict annual claim frequencies (insurance contracts are supposed to provide one-year coverage). But it might be interesting to get a better understanding of why people might be leaving our portfolio (i.e. cancelling their insurance policy before term, or not renewing it someday).

To be more specific and get a better understanding, consider the following model: a Poisson process for claims arrival, and insured who are dedicated to their insurance company (they never leave). Let us generate scenarios over twenty years

> n=983
> D1=as.Date("01/01/1993",'%d/%m/%Y')
> D2=as.Date("31/12/2013",'%d/%m/%Y')
> L=D1+0:(D2-D1)
> set.seed(1)
> arrival=sample(L,size=n,replace=TRUE)
> exposure=N=rep(NA,n)
> departure=rep(D2,n)
> set.seed(2)
> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   while(max(w)<expo) w=c(w,max(w)+1+trunc(rexp(1,1/1000)))
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-2)}
> df=data.frame(N=N,E=exposure/365)

Here the expected time between claims is considered to be 1000 days. The (annual) intensity of the Poisson process is here

> 365/1000
[1] 0.365

so if we run a Poisson regression on the logarithm of the exposure (please feel free to add other covariates if you want, the example here is just to see what could happen when exposure is considered as a standard covariate), we should get a parameter close to

> log(365/1000)
[1] -1.007858

Here, the regression on a constant, with the offset variable is

> reg=glm(N~1+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ 1 + offset(log(E)), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.4145  -0.4673   0.2367   0.8770   3.6828  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.04233    0.02532  -41.17   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.9  on 982  degrees of freedom
Residual deviance: 1116.9  on 982  degrees of freedom
AIC: 3282.9

Number of Fisher Scoring iterations: 5

which is consistent with what we just said. If we run the regression with the logarithm of the exposure as a possible explanatory variable, we would expect to have a coefficient close to 1. And indeed…

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.03350    0.08546  -12.09   <2e-16 ***
log(E)       1.00920    0.03292   30.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2553.6  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5

If we keep the offset and add the variable, we can see that it becomes useless (which is, somehow, a test of a unit parameter)

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.033503   0.085460 -12.093   <2e-16 ***
log(E)       0.009201   0.032920   0.279     0.78    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1064.3  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5

Here, we do have pure Poisson processes, so exposure is crucial, since the parameter of the Poisson distribution is proportional to the exposure. But we cannot learn anything else from the exposure.

Consider some real data.

> head(baseFREQ)
  nocontrat exposition zone puissance agevehicule
1        27       0.87    C         7           0
2       115       0.72    D         5           0
3       121       0.05    C         6           0
4       142       0.90    C        10          10
5       155       0.12    C         7           0
6       186       0.83    C         5           0
  ageconducteur bonus marque carburant densite region nbre
1            56    50     12         D      93     13    0
2            45    50     12         E      54     13    0
3            37    55     12         D      11     13    0
4            42    50     12         D      93     13    0
5            59    50     12         E      73     13    0
6            75    50     12         E      42     13    0

What do we get if we consider a Poisson regression on the logarithm of the exposure ?

> reg=glm(nbre~log(exposition),data=baseFREQ,family=poisson)
> summary(reg)

Call:
glm(formula = nbre ~ log(exposition), family = poisson, data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
                Estimate Std. Error z value Pr(>|z|)    
(Intercept)     -2.83045    0.02822 -100.31   <2e-16 ***
log(exposition)  0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

If we add the exposure to the offset, what happens? (let us use a nonparametric transformation, to visualize what’s going on)

> library(gam)
> reg=gam(nbre~offset(log(exposition))+s(exposition),data=baseFREQ,family=poisson)
> plot(reg,se=TRUE)

There is a clear and significant effect. The longer insured stay, the less likely they are to get a claim. Actually, it can be observed even without running a regression.

> i1=which(baseFREQ$nbre>0)
> i0=which(baseFREQ$nbre==0)
> h1=hist(baseFREQ$exposition[i1],probability=TRUE)
> h0=hist(baseFREQ$exposition[i0],probability=TRUE)
> plot(h1$mids,h1$density,type='s',lwd=2,col="red")
> lines(h0$mids,h0$density,type='s',col='blue',lwd=2)

In blue, we have the density of the exposure for those who did not have claims, and in red, the density of those who did have one claim (or more)

So here, we cannot assume a unit value for the parameter. What does that mean ? Can we reproduce such a behavior ?

In order to get a better understanding, consider two possible behaviors for the insured. The first one is the following: if the company does not offer substantial discounts after several years with no claims, the insured might leave the company. For instance, if the insured has no claim during 5 years, then after those 5 years, he will leave the company (to get a better price somewhere else, say). The code will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=c(0,0)
+   while((max(w)<expo) & (max(diff(w))<1500)) w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   if(max(diff(w))>1500) departure[i]=arrival[i]+max(w[-length(w)])+1500
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-3)}
> df=data.frame(N=N,E=exposure/365)

Here, I consider 1500 days instead of 5 years, but it is the same idea. So, what do we have here?

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       1.65738    0.04494   36.88   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2567.31  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

Here, the coefficient is (significantly) larger than 1. More precisely,

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       0.65738    0.04494   14.63   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1114.24  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

There is clearly a bias here: people staying longer are more likely to have an accident, which is consistent with our story, since clients with low risks left.

The second behavior is the following: sometimes, the insured are not satisfied with the way claims are handled, and they might leave after the first claim. Consider the case where, after one claim, it is likely (e.g. with probability 50%) that the insured leaves the company. Instead of assuming that the insured did not like the claims management, one can also consider the case where the car is so damaged that he cannot drive it anymore, so it would be useless to keep paying an insurance premium. The code here will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   stay=TRUE
+   while((max(w)<expo) & (stay==TRUE)) { w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   stay=sample(c(TRUE,FALSE),prob=c(.5,.5),size=1)}
+   N[i]=length(w)-2
+   if(stay==FALSE) {departure[i]=arrival[i]+max(w)
+   N[i]=length(w)-1}
+   exposure[i]=departure[i]-arrival[i]}
> df=data.frame(N=N,E=exposure/365)

Here, after each claim, the insured tosses a coin to decide whether to cancel the contract, or not.

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)       0.30640    0.02511  12.203   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 666.92  on 982  degrees of freedom
Residual deviance: 498.29  on 981  degrees of freedom
AIC: 2666.3

This time, the parameter is (again significantly) smaller than one.

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)      -0.69360    0.02511 -27.625   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.87  on 982  degrees of freedom
Residual deviance:  498.29  on 981  degrees of freedom
AIC: 2666.3

The story is now rather different, since those who stay long cannot have encountered many opportunities to leave: clearly, they did not have many claims. If someone has a long exposure, the negative sign in the output above means that he should not have had many claims, on average.

As we can see, those models produce rather different outputs. Note that many more interpretations are possible. For instance, depending on the way data were extracted,

  • all policies observed, over those twenty years,
  • all policies in force at some specific date, until now
  • all policies in force at some specific date, until one year after
  • all policies in force now

So far, we have been using the first method, but the other ones will yield different interpretations, e.g. because of survivor bias. But that’s another story… And one can read Boucher and Denuit (2008) to go further.