Category Archives: Course

On the abuse of notation in regression models

In a somewhat ritual way, I always start my regression course by coming back to an important point in statistics: abuses of notation! Everyone uses the same letters (especially Greek ones) to denote objects of very different natures. In most textbooks, the same page can tell us that \widehat{\theta}=2.35 and that \text{Var}(\widehat{\theta})=1.07; in other words, \widehat{\theta} can denote at the same time a number (in the first case) and a random variable (in the second). Confusing, to say the least! Actually, the reason is rather simple. Statistics always starts from a sample \{y_1,y_2,\cdots,y_n\}, data, numbers. If we stop there, we are doing descriptive statistics. The classical next step is to assume that the observations y_i are realizations of random variables Y_i, usually assumed independent and identically distributed. And \widehat{\theta} will then be a statistic, that is, a function of my observations. I can define \widehat{\theta}=t(y_1,\cdots,y_n) as the statistic observed on my sample, but I can also consider \widehat{\theta}=t(Y_1,\cdots,Y_n), which is then a random variable, with the same notation. If we really wanted to help the reader, we would write \widehat{\Theta}, but well, things are what they are… And in econometrics, it quickly becomes a nightmare once we start talking about residuals… Another peculiarity of statistics: while we distinguish the expectation from the (empirical) mean, we have a single word for the variance, whether it is the variance of a random variable or of a vector of \mathbb{R}^n. We thus have \mathbb{E}[Y]=\int y\,dF(y) and \overline{y}=\widehat{\mathbb{E}}[\boldsymbol{y}]=\frac{1}{n}\sum_{i=1}^n y_i, while \text{Var}[Y]=\int [y-\mathbb{E}[Y]]^2 dF(y) and \widehat{\text{Var}}[\boldsymbol{y}]=\frac{1}{n}\sum_{i=1}^n (y_i-\overline{y})^2.
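To make the estimate vs. estimator distinction concrete, here is a small R sketch (entirely simulated, so the numbers and the sample size are just illustrative): \widehat{\theta} computed on one sample is a single number, while the corresponding random variable has a distribution that we can approximate by recomputing the same statistic on many simulated samples. Note also that R's var() uses the 1/(n-1) denominator, not the 1/n written above.

set.seed(1)
y = rnorm(20, mean = 2, sd = 1)     # one observed sample
theta_hat = mean(y)                 # the estimate: a single number
theta_hat
mean((y - mean(y))^2)               # empirical variance with the 1/n denominator
var(y)                              # R uses the 1/(n-1) denominator

# the estimator: a random variable, whose distribution we can approximate
# by recomputing the same statistic on many simulated samples
theta_hat_sim = replicate(1e4, mean(rnorm(20, mean = 2, sd = 1)))
var(theta_hat_sim)                  # close to sigma^2/n = 1/20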

Now, consider a regression problem, with a model of the form y_i=\boldsymbol{x}_i^\top\boldsymbol{\beta}+\varepsilon_i. Here, \varepsilon_i is an unknown real number. In matrix form, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}, where this time \boldsymbol{\varepsilon} is a vector of \mathbb{R}^n (and yes, I am sorry, but here \boldsymbol{X} denotes the matrix of covariates, not a random vector… I will write a post some day about the fact that sometimes the \boldsymbol{x}'s are said to be given, and sometimes – since we condition on \boldsymbol{X} – they are seen as random). One can sometimes make an assumption about the distribution of those error terms. In other words, the \varepsilon_i's are seen as realizations of random variables, also denoted \varepsilon_i, and likewise for the vector \boldsymbol{\varepsilon}. We will then write \boldsymbol{\varepsilon}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma}). Ah yes, another point, just to lose the students: \text{Var}(\boldsymbol{\varepsilon})=\boldsymbol{\Sigma} while \text{Var}(\varepsilon_i)=\sigma^2… Well, since we assume here that the observations are independent and identically distributed, we will suppose that \text{Var}(\boldsymbol{\varepsilon})=\boldsymbol{\Sigma}=\sigma^2\mathbb{I}.

Once again, \boldsymbol{\varepsilon} is (by definition) not observable. However, we can estimate those residuals: given an estimator \widehat{\boldsymbol{\beta}} of \boldsymbol{\beta}, we can define \widehat{\boldsymbol{\varepsilon}}=\boldsymbol{y}-\widehat{\boldsymbol{y}}=\boldsymbol{y}-\boldsymbol{X}\widehat{\boldsymbol{\beta}}. To clarify things, I will rather write \widehat{\boldsymbol{e}} for these estimated residuals, obtained with the least squares estimator of \boldsymbol{\beta}. Note that \widehat{\boldsymbol{e}}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}, where, classically, \boldsymbol{H}=\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top is the projection matrix onto the space spanned by all linear combinations of the explanatory variables. But here again, the (numerical) vector \widehat{\boldsymbol{e}} can be seen as the realization of a random vector \widehat{\boldsymbol{E}}. In particular, \widehat{\boldsymbol{E}}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{Y}=(\mathbb{I}-\boldsymbol{H})\boldsymbol{\varepsilon}, where \boldsymbol{\varepsilon} is our random vector, centered, with variance-covariance matrix \text{Var}(\boldsymbol{\varepsilon})=\sigma^2\mathbb{I}. We can then deduce that \mathbb{E}[\widehat{\boldsymbol{E}}]=(\mathbb{I}-\boldsymbol{H})\mathbb{E}[\boldsymbol{\varepsilon}]=\boldsymbol{0} and \text{Var}[\widehat{\boldsymbol{E}}]=(\mathbb{I}-\boldsymbol{H})\text{Var}[\boldsymbol{\varepsilon}](\mathbb{I}-\boldsymbol{H})^\top=\sigma^2(\mathbb{I}-\boldsymbol{H}) (because \mathbb{I}-\boldsymbol{H} is idempotent). This last relation is particularly important, since it shows that \text{Var}(\widehat{\boldsymbol{E}})\neq\sigma^2\mathbb{I}. In particular, for any given estimated residual, \text{Var}(\widehat{E}_i)=\sigma^2(1-\boldsymbol{H}_{i,i}); we discussed \boldsymbol{H}_{i,i} at length in a recent post, on leverage, where we saw that \boldsymbol{H}_{i,i}\in[0,1] (we discussed the lower bound, which can actually be improved: in fact \boldsymbol{H}_{i,i}\in(0,1]), and therefore \text{Var}(\widehat{E}_i)\leq\sigma^2. Going a bit further, we can look at the sum of the squared estimated residuals, and note that \mathbb{E}\big[\sum_{i=1}^n \widehat{E}_i^2\big]=\mathbb{E}[\text{trace}(\widehat{\boldsymbol{E}}\widehat{\boldsymbol{E}}^\top)]=\text{trace}(\mathbb{E}[\widehat{\boldsymbol{E}}\widehat{\boldsymbol{E}}^\top]), i.e. \mathbb{E}\big[\sum_{i=1}^n \widehat{E}_i^2\big]=\sigma^2\text{trace}(\mathbb{I}-\boldsymbol{H}); since \text{trace}(\mathbb{I}-\boldsymbol{H})=n-p, it follows that \widehat{\sigma}^2=\frac{1}{n-p}\sum_{i=1}^n \widehat{E}_i^2 is an unbiased estimator of \sigma^2. And classically, we will consider the Studentized residuals \widehat{R}_i=\frac{\widehat{E}_i}{\widehat{\sigma}\sqrt{1-\boldsymbol{H}_{i,i}}}. To sum up, we could say that \text{Var}(\boldsymbol{\varepsilon})=\sigma^2\mathbb{I}, \widehat{\text{Var}}(\boldsymbol{\varepsilon})=\widehat{\sigma}^2\mathbb{I}, \text{Var}(\widehat{\boldsymbol{E}})=\sigma^2(\mathbb{I}-\boldsymbol{H}) and \widehat{\text{Var}}(\widehat{\boldsymbol{E}})=\widehat{\sigma}^2(\mathbb{I}-\boldsymbol{H}). Hoping this clarifies things a bit… (?)
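A minimal R sketch of those relations (on arbitrary simulated data, so all names here are illustrative): the Studentized residuals above are exactly what rstandard() returns, with the leverages \boldsymbol{H}_{i,i} extracted from lm.influence().

set.seed(123)
n = 100
x = runif(n)
y = 1 + 2*x + rnorm(n)
model = lm(y ~ x)

H = lm.influence(model)$hat        # leverages H_{i,i}
e = residuals(model)               # estimated residuals
sigma2_hat = sum(e^2)/(n - 2)      # unbiased estimator of sigma^2 (p = 2 here)

r_hand = e / sqrt(sigma2_hat*(1 - H))   # Studentized residuals, by hand
max(abs(r_hand - rstandard(model)))     # numerically ~ 0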

On leverage

Last week, in our STT5100 (applied linear models) class, I’ve introduced the hat matrix, and the notion of leverage. In the classical regression model \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in matrix form), the ordinary least squares estimator of the parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}. The prediction can then be written \widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}, where \color{blue}{\boldsymbol{H}} is called the hat matrix.

The matrix is idempotent, i.e. \boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}, so it can be interpreted as a projection matrix. Furthermore, since \boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is onto the subspace that contains all the linear combinations of the columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is a projection matrix. And we can write \boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}, where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} on the (linear) space of linear combinations of the columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of the residuals, being unpredictable (at least with a linear model using the variables \boldsymbol{X}).
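As a quick numerical check (a sketch on arbitrary simulated data), one can build \boldsymbol{H} explicitly and verify idempotency, \boldsymbol{H}\boldsymbol{X}=\boldsymbol{X}, the orthogonality of \widehat{\boldsymbol{y}} and \widehat{\boldsymbol{\varepsilon}}, and the fact that \text{trace}(\boldsymbol{H})=p (used below).

set.seed(1)
n = 20
X = cbind(1, runif(n), runif(n))          # design matrix, with an intercept (p = 3)
y = X %*% c(1, 2, -1) + rnorm(n)

H = X %*% solve(t(X) %*% X) %*% t(X)      # hat matrix
max(abs(H %*% H - H))                     # idempotent: H^2 = H (numerically ~ 0)
max(abs(H %*% X - X))                     # HX = X
y_hat = H %*% y                           # fitted values
e_hat = (diag(n) - H) %*% y               # residuals
sum(y_hat * e_hat)                        # orthogonality (numerically ~ 0)
sum(diag(H))                              # trace(H) = p = 3 here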

Let’s move a bit faster now (we’ve seen many other properties last week), and consider elements on the diagonal of matrix \boldsymbol{H}. Recall that we have \widehat{y}_i=\boldsymbol{H}_{i,i}y_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}y_j

so entry \boldsymbol{H}_{i,i} is a measure of the influence of entry y_i on its own prediction \widehat{y}_i.

We have seen that \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top), which can be written \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1})=\text{trace}(\mathbb{I}_p)=p, where classically p=k+1, k being the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2, and using the symmetry of \boldsymbol{H}) that \boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2. On the one hand, since the second term is non-negative, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. \boldsymbol{H}_{i,i}\leq 1. And since both terms are non-negative, \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in the course on the sharpness of those bounds.

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is here extremely influential: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all the other points are equally influential, and because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.

Now, what about the lower bound? In order to have some sort of “non-influential” observations, consider the two following cases.

  • the case where one observation (below the first one) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
  • the case where one observation (below the tenth one) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}}; from the first order condition – or normal equations – the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}})

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
                y = c(predict(model,newdata=data.frame(x=4)),
                      2:9,8,
                      predict(model,newdata=data.frame(x=1.8))))

We now have

If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, its leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero ?

Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above), \boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}, which is minimum when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n, otherwise \boldsymbol{H}_{i,i}>1/n. And this property is also valid in a multiple regression (as soon as an intercept is included in the regression – which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}; then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i} (which is basically a matrix version of the previous equation).
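A quick numerical check of that formula (a sketch, on arbitrary simulated data): the leverages returned by hatvalues() match 1/n+(x_i-\overline{x})^2/\sum_j(x_j-\overline{x})^2, which is the expression above with the empirical variance \text{Var}(x)=\frac{1}{n}\sum_j(x_j-\overline{x})^2.

set.seed(1)
n = 25
x = runif(n)
y = 1 + 2*x + rnorm(n)
model = lm(y ~ x)

h_lm = hatvalues(model)                                  # leverages from R
h_formula = 1/n + (x - mean(x))^2 / sum((x - mean(x))^2)
max(abs(h_lm - h_formula))                               # numerically ~ 0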

I can maybe add another comment on Anscombe’s data. We’ve seen that, on the right, we did reach 1. But I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left. Those all have the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}. Hence, using the relationship obtained above from the idempotency of the hat matrix, \boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2, and thus \boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0, i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2]; the upper bound becomes 1/k when an observation has k “duplicates” (including itself), here 1/(n-1). So the n-1 \boldsymbol{H}_{i,i}‘s on the left are each below 1/(n-1), the last one is below 1, and the sum has to be p=2: this forces the last leverage to be exactly 1, and each of the n-1 others to be exactly 1/(n-1). So we have the value of the n \boldsymbol{H}_{i,i}‘s.
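We can check that numerically on the dataset used above (a quick sketch, recreating it here): the ten duplicated points each have leverage 1/10, and the isolated one has leverage 1.

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
model = lm(y ~ x, data = df)
hatvalues(model)        # the first ten leverages are all 1/10, the eleventh is 1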

 

Insurance data science : Pictures

At the Summer School of the Swiss Association of Actuaries, in Lausanne, following the part of Jean-Philippe Boucher (UQAM) on telematic data, I will start talking about pictures this Wednesday. Slides are available online

Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem, related to Alzheimer detection.

We will try to identify what is on the following pictures, starting with the car

(we will see that the car is indeed identified)

a skier,

and a fire,

We will also discuss previous pictures from the summer school

Insurance data science : use and value of unusual data #1

Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).

I will give an introductory talk on Monday morning, and the slides are now available.

There will be some hands-on applications, in R. I will share some code in the slides.

SIDE Summer School, day 1

This morning, we start the SIdE (Italian Econometric Association) Summer School, on Machine Learning Algorithms for Econometricians. Emmanuel Flachaire will start with a presentation of nonparametric econometric techniques. I will then get back to the geometry of (standard) econometric techniques, to introduce kernels. The first series of slides is online.

I will then spend more time on the (popular) idea of “least squares” and mention other loss functions. Slides are online.

What is the interpretation of the diagonal for a ROC curve?

Last Friday, we discussed the use of ROC curves to describe the goodness of a classifier. I did say that I would post a brief paragraph on the interpretation of the diagonal. If you look around, some say that it describes the “strategy of randomly guessing a class”, that it is obtained with “a diagnostic test that is no better than chance level”, or even by “making a prediction by tossing of an unbiased coin”.

Let us get back to ROC curves to illustrate those points. Consider a very simple dataset with 10 observations (that is not linearly separable)

x1 = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
x2 = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
y = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x1,x2=x2,y=as.factor(y))

here we can check that, indeed, it is not separable

plot(x1,x2,col=c("red","blue")[1+y],pch=19)

Consider a logistic regression (the course is on linear models)

reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))

but any model here can be used… We can use our own function

Y=df$y
S=predict(reg)
roc.curve=function(s,print=FALSE){
  Ps=(S>=s)*1
  FP=sum((Ps==1)*(Y==0))/sum(Y==0)
  TP=sum((Ps==1)*(Y==1))/sum(Y==1)
  if(print==TRUE){print(table(Observed=Y,Predicted=Ps))}
  vect=c(FP,TP)
  names(vect)=c("FPR","TPR")
  return(vect)
}

or any R package actually

library(ROCR)

perf=performance(prediction(S,Y),"tpr","fpr")

We can plot the two simultaneously here

plot(performance(prediction(S,Y),"tpr","fpr"))
V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

So our code works just fine, here. Let us consider various strategies that should lead us to the diagonal.

The first one is : everyone has the same probability (say 50%)

S=rep(.5,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

Indeed, we have the diagonal. But to be honest, we have only two points here : (0,0) and (1,1). Claiming that we have a straight line is not very satisfying… Actually, note that we have this situation whatever the probability we choose

S=rep(.2,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

We can try another strategy, like “making a prediction by tossing of an unbiased coin“. This is what we obtain

set.seed(1)

S=sample(0:1,size=10,replace=TRUE)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

We can also try some sort of “random classifier”, where we choose the score randomly, say uniform on the unit interval

set.seed(1)

S=runif(10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Let us try to go further on that one. For convenience, let us consider another function to plot the ROC curve

V=Vectorize(roc.curve)(seq(0,1,length=251))

roc_curve=Vectorize(function(x) max(V[2,which(V[1,]<=x)]))

We have the same line as previously

x=seq(0,1,by=.025)

y=roc_curve(x)
lines(x,y,type="s",col="red")

But now, consider many scoring strategies, all randomly chosen

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(10)
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,df$y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

The red line is the average of all the random classifiers. It is not a straight line, but we observe oscillations around the diagonal.

Consider a dataset with more observations


myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")

myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1

reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))

Y=myocarde$PRONO

S=predict(reg)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Here is a “random classifier” where we draw scores randomly on the unit interval

S=runif(nrow(myocarde))
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

And if we do that 500 times, we obtain, on average

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(length(Y))
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,Y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

So, it looks like we might say that the diagonal is what we have, on average, when drawing scores randomly on the unit interval…
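Another way to see it (a sketch, not in the original code, reusing the Y defined above): the AUC of such random scorings is, on average, about 0.5, which is exactly the area under the diagonal.

set.seed(1)
auc_random = replicate(500, {
  S = runif(length(Y))
  performance(prediction(S,Y),"auc")@y.values[[1]]
})
mean(auc_random)        # close to 0.5, the area under the diagonal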

I did mention that an interesting visual tool could be related to the use of the Kolmogorov Smirnov statistic on classifiers. We can plot the two empirical cumulative distribution functions of the scores, given the response Y

score=data.frame(yobs=Y,
                 ypred=predict(reg,type="response"))

f0=c(0,sort(score$ypred[score$yobs==0]),1)

f1=c(0,sort(score$ypred[score$yobs==1]),1)
plot(f0,(0:(length(f0)-1))/(length(f0)-1),col="red",type="s",lwd=2,xlim=0:1)
lines(f1,(0:(length(f1)-1))/(length(f1)-1),col="blue",type="s",lwd=2)
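To quantify the maximal gap between those two curves (a quick sketch, not in the original code), one can use the two-sample Kolmogorov-Smirnov test on the two conditional score samples; the statistic D is the largest vertical distance between the two empirical cumulative distribution functions.

ks.test(score$ypred[score$yobs==0], score$ypred[score$yobs==1])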

we can also look at the distribution of the score, with the histogram (or density estimates)

S=score$ypred

hist(S[Y==0],col=rgb(1,0,0,.2),
     probability=TRUE,breaks=(0:10)/10,border="white")
hist(S[Y==1],col=rgb(0,0,1,.2),
     probability=TRUE,breaks=(0:10)/10,border="white",add=TRUE)
lines(density(S[Y==0]),col="red",lwd=2,xlim=c(0,1))
lines(density(S[Y==1]),col="blue",lwd=2)

The underlying idea is the following : we do have a “perfect classifier” (top left corner)

if the supports of the scores do not overlap,

otherwise, we should have errors. That is the case below,

where in 10% of the cases, we might have misclassification,

or even more misclassification, with overlapping supports,

Now, we have the diagonal

when the two conditional distributions of the scores are identical

Of course, that is only valid when n is very large; otherwise, it is only what we observe on average….

Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:

  • The square root link (for the Poisson model)

Consider some random variable Y with mean \mu and variance \sigma^2. Using Taylor’s expansion, g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu), we can write \mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu) and \text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2.

Assume that Y\sim\mathcal{P}(\lambda), and consider a square root transformation, g(y)=\sqrt{y}; then the second equality becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}.

So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
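As a quick simulation check (a sketch, with arbitrary values of the mean), the variance of \sqrt{Y} is indeed close to 1/4 for Poisson samples, whatever the mean (as long as it is not too small).

set.seed(1)
# variance of sqrt(Y) for Poisson samples with various means
sapply(c(5,10,50,100), function(l) var(sqrt(rpois(1e5,l))))
# all values are reasonably close to 1/4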

  • The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]. Thus, \mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])], while \mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta}), where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model seems like a Bernoulli regression with H as link function, \mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta}).

So, assume now that instead of observing N, we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as running first a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us compare e^{-\lambda_{\mathbf{x}}} and p_{\mathbf{x}}, the latter obtained from a standard Bernoulli regression (with a probit link, here) on the indicator of \{N=0\}

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if p_{\mathbf{x}} was obtained from a Bernoulli regression, with a cloglog link function ?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here ! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case, the two models are very different. And actually, so is the second comparison, below

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that ? Could it be because the Poisson model is not good ? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)
 
Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.002274   0.048413  -0.047    0.963    
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 *** 
Zero-inflation model coefficients (binomial with logit link): 
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…

GLMs: link vs. distribution

Usually, when I give a course on GLMs, I try to insist on the fact that the link function is probably more important than the distribution. In order to illustrate, consider the following dataset, with 5 observations

x = c(1,2,3,4,5)
y = c(1,2,4,2,6)
base = data.frame(x,y)

Then consider several models, with various distributions, and either an identity link (and in that case \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbf{x}^T\mathbf{\beta}) or a log link function (so that \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=e^{\mathbf{x}^T\mathbf{\beta}})

regNId = glm(y~x,family=gaussian(link="identity"),data=base)
regNlog = glm(y~x,family=gaussian(link="log"),data=base)
regPId = glm(y~x,family=poisson(link="identity"),data=base)
regPlog = glm(y~x,family=poisson(link="log"),data=base)
regGId = glm(y~x,family=Gamma(link="identity"),data=base)
regGlog = glm(y~x,family=Gamma(link="log"),data=base)
regIGId = glm(y~x,family=inverse.gaussian(link="identity"),data=base)
regIGlog = glm(y~x,family=inverse.gaussian(link="log"),data=base)

One can also consider some Tweedie distribution, to be even more general

library(statmod)
regTwId = glm(y~x,family=tweedie(var.power=1.5,link.power=1),data=base)
regTwlog = glm(y~x,family=tweedie(var.power=1.5,link.power=0),data=base)

Consider the prediction obtained in the first case, with the linear link function

library(RColorBrewer)
darkcols = brewer.pal(8, "Dark2")
plot(x,y,pch=19)
abline(regNId,col=darkcols[1])
abline(regPId,col=darkcols[2])
abline(regGId,col=darkcols[3])
abline(regIGId,col=darkcols[4])
abline(regTwId,lty=2)

The predictions are very very close, aren’t they ? In the case of the exponential prediction, we obtain

plot(x,y,pch=19)
u=seq(.8,5.2,by=.01)
lines(u,predict(regNlog,newdata=data.frame(x=u),type="response"),col=darkcols[1])
lines(u,predict(regPlog,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(regGlog,newdata=data.frame(x=u),type="response"),col=darkcols[3])
lines(u,predict(regIGlog,newdata=data.frame(x=u),type="response"),col=darkcols[4])
lines(u,predict(regTwlog,newdata=data.frame(x=u),type="response"),lty=2)

We can actually look closer. For instance, in the linear case, consider the slope obtained with a Tweedie model (which actually includes all the parametric families mentioned here)

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base))$coefficients[2,1:2]
Vgamma = seq(-.5,3.5,by=.05)
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3,ylim=c(.965,1.03),xlab="power",ylab="slope")

The slope here is always very very close to one ! Even more if we add a confidence interval

plot(Vgamma,Vpente[1,])
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

Heuristically, for the Gamma regression, or the Inverse Gaussian one, because the variance is a power of the prediction, if the prediction is small (here on the left), the variance should be small. So, on the left of the graph, the error should be small with a higher power for the variance function. And that’s indeed what we observe here

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(-.1,.04),xlab="power",ylab="error")
abline(h=0,lty=2)

Of course, we can do the same with the exponential models

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base))$coefficients[2,1:2]
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3)

or, if we add the confidence bands, we obtain

plot(Vgamma,Vpente[1,],ylim=c(0,.8),type="l",lwd=3,xlab="power",ylab="slope")
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

So here also, the “slope” is rather similar… And if we look at the error we make on the left part of the graph, we obtain

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(.001,.32),xlab="power",ylab="error")

So my point is that the distribution is usually not the most important point in GLMs, even if chapters of books on GLMs are distribution based… But as mentioned in another post, if you consider a nonlinear transformation, like we have with GAMs, the story is more complicated…

Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations, and nine variables : eight covariates, and the variable of interest, the number of extramarital affairs, over a year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion, and the occupation

df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that N\sim\mathcal{P}(\lambda\cdot E), where \lambda would be

sum(N)/sum(E)
[1] 0.6305506

The idea of the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j}, where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two series of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first series of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j}, and use the second series to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i‘s to compute the B_j‘s, and conversely, it is natural to consider some iterative procedure to solve it. Observe that we do not have uniqueness…

Consider here some starting values for A_i‘s and B_j‘s

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j

E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.5222025  0.6305506  5.0444050  2.5222025 10.0888099  5.6749556  0.0000000
       2 14.5026643  1.8916519  6.9360568 10.7193606 35.3108348 22.6998224  3.7833037
       3 18.2859680  0.6305506  6.3055062  7.5666075 24.5914742 15.7637655  1.2611012
       4 23.9609236  4.4138544  7.5666075 13.2415631 37.2024867 27.7442274  1.2611012
       5  8.1971581  0.6305506  1.8916519  6.3055062 11.9804618 11.9804618  1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5 
 26.48313  95.84369  74.40497 115.39076  42.87744 
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7 
107  13  44  64 189 133  13

From expressions above, observe that one can very easily write expressions of A_i‘s and B_j‘s as functions of B_j‘s and A_i‘s respectively

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let it iterate one thousand times

for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5 
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763 
B
        1         2         3         4         5         6         7 
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750 
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

That is our prediction, per category, of the number of affairs. Observe that here, sums per row are equal to observed numbers,

apply(N,1,sum)
  1   2   3   4   5 
 41 103 108  73  30 
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5 
 41 103 108  73  30

as well as sums per column

apply(N,2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21 
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21

Now, why should I mention that here, in the section on the Poisson regression in our course ? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.32604    0.21325  -1.529 0.126285    
religion2   -0.38832    0.18791  -2.066 0.038783 *  
religion3   -0.03829    0.18585  -0.206 0.836771    
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083    
occupation3  0.59022    0.21349   2.765 0.005699 ** 
occupation4  0.43588    0.20603   2.116 0.034381 *  
occupation5  0.04265    0.17590   0.242 0.808399    
occupation6  0.50587    0.17360   2.914 0.003569 ** 
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1          2          3          4          5          6          7
          1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813  0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703  1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464  0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the A_i‘s

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072 
A/A[1]
        1         2         3         4         5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for B_j‘s

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7 
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477 
B/B[1]
        1         2         3         4         5         6         7 
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478

This will have major implications in non-life insurance models (for claims reserving).