Category Archives: Course

Insurance data science : Pictures

At the Summer School of the Swiss Association of Actuaries, in Lausanne, following Jean-Philippe Boucher's (UQAM) part on telematics data, I will start talking about pictures this Wednesday. Slides are available online.

Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem, related to Alzheimer detection.

We will try to identify what is on the following pictures, starting with the car

(we will see that the car is indeed identified)

a skier,

and a fire,

We will also discuss previous pictures from the summer school

Insurance data science : use and value of unusual data #1

Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).

I will give an introductory talk on Monday morning, and the slides are now available.

There will be some hands-on applications, in R. I will share some code in the slides.

SIDE Summer School, day 1

This morning, we start the SIdE (Italian Econometric Association) Summer School, on Machine Learning Algorithms for Econometricians. Emmanuel Flachaire will start with a presentation of nonparametric econometric techniques. I will then get back to the geometry of (standard) econometric techniques, to introduce kernels. The first series of slides are online.

I will then spend more time on the (popular) idea of “least squares” and mention other loss functions. Slides are online.

What is the interpretation of the diagonal for a ROC curve

Last Friday, we discussed the use of ROC curves to describe the goodness of a classifier. I did say that I would post a brief paragraph on the interpretation of the diagonal. If you look around, some say that it describes the “strategy of randomly guessing a class”, that it is obtained with “a diagnostic test that is no better than chance level”, or even by “making a prediction by tossing of an unbiased coin”.

Let us get back to ROC curves to illustrate those points. Consider a very simple dataset with 10 observations (that is not linearly separable)

x1 = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
x2 = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
y = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x1,x2=x2,y=as.factor(y))

here we can check that, indeed, it is not separable

plot(x1,x2,col=c("red","blue")[1+y],pch=19)

Consider a logistic regression (the course is on linear models)

reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))

but any model here can be used… We can use our own function

Y=df$y
S=predict(reg)
roc.curve=function(s,print=FALSE){
  Ps=(S>=s)*1
  FP=sum((Ps==1)*(Y==0))/sum(Y==0)
  TP=sum((Ps==1)*(Y==1))/sum(Y==1)
  if(print==TRUE){print(table(Observed=Y,Predicted=Ps))}
  vect=c(FP,TP)
  names(vect)=c("FPR","TPR")
  return(vect)
}

or any R package actually

library(ROCR)

perf=performance(prediction(S,Y),"tpr","fpr")

We can plot the two simultaneously here

plot(performance(prediction(S,Y),"tpr","fpr"))
V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

So our code works just fine, here. Let us consider various strategies that should lead us to the diagonal.

The first one is : everyone has the same probability (say 50%)

S=rep(.5,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

Indeed, we have the diagonal. But to be honest, we have only two points here: (0,0) and (1,1). Claiming that we have a straight line is not very satisfying… Actually, note that we have this situation whatever constant probability we choose

S=rep(.2,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

We can try another strategy, like “making a prediction by tossing of an unbiased coin”. This is what we obtain

set.seed(1)

S=sample(0:1,size=10,replace=TRUE)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

We can also try some sort of “random classifier”, where we choose the score randomly, say uniformly on the unit interval

set.seed(1)

S=runif(10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Let us try to go further on that one. For convenience, let us consider another function to plot the ROC curve

V=Vectorize(roc.curve)(seq(0,1,length=251))

roc_curve=Vectorize(function(x) max(V[2,which(V[1,]<=x)]))

We have the same line as previously

x=seq(0,1,by=.025)

y=roc_curve(x)
lines(x,y,type="s",col="red")

But now, consider many scoring strategies, all randomly chosen

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(10)
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,df$y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

The red line is the average of all random classifiers. It is not a straight line, but we observe oscillations around the diagonal.

Consider a dataset with more observations


myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")

myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1

reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))

Y=myocarde$PRONO

S=predict(reg)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Here is a “random classifier” where we draw scores randomly on the unit interval

S=runif(nrow(myocarde))
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

And if we do that 500 times, we obtain, on average

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(length(Y))
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,Y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

So, it looks like we might say that the diagonal is what we have, on average, when drawing scores randomly on the unit interval…

I did mention that an interesting visual tool could be related to the use of the Kolmogorov-Smirnov statistic on classifiers. We can plot the two empirical cumulative distribution functions of the scores, given the response Y

score=data.frame(yobs=Y,
                 ypred=predict(reg,type="response"))

f0=c(0,sort(score$ypred[score$yobs==0]),1)

f1=c(0,sort(score$ypred[score$yobs==1]),1)
plot(f0,(0:(length(f0)-1))/(length(f0)-1),col="red",type="s",lwd=2,xlim=0:1)
lines(f1,(0:(length(f1)-1))/(length(f1)-1),col="blue",type="s",lwd=2)

we can also look at the distribution of the scores, with histograms (or density estimates)

S=score$ypred

hist(S[Y==0],col=rgb(1,0,0,.2),
     probability=TRUE,breaks=(0:10)/10,border="white")
hist(S[Y==1],col=rgb(0,0,1,.2),
     probability=TRUE,breaks=(0:10)/10,border="white",add=TRUE)
lines(density(S[Y==0]),col="red",lwd=2,xlim=c(0,1))
lines(density(S[Y==1]),col="blue",lwd=2)

The underlying idea is the following : we do have a “perfect classifier” (top left corner)

if the supports of the scores do not overlap,

otherwise, we should have errors. That is the case below,

where in 10% of the cases, we might have misclassifications,

or even more misclassifications, when the supports overlap,

Now, we have the diagonal

when the two conditional distributions of the scores are identical

Of course, that is only valid when n is very large; otherwise, it is only what we observe on average…
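To make the point about overlapping supports concrete, here is a small sketch (not from the original notes) that simulates scores with disjoint and with overlapping supports, and plots the two ROC curves with the ROCR package used above; the variable names, sample sizes and support bounds are arbitrary choices,

library(ROCR)
set.seed(1)
y_sim = rep(0:1, each = 100)
# disjoint supports : the 0's and the 1's never get the same scores
s_disjoint = c(runif(100, 0, .45), runif(100, .55, 1))
# overlapping supports : some 0's score higher than some 1's
s_overlap = c(runif(100, 0, .6), runif(100, .4, 1))
plot(performance(prediction(s_disjoint, y_sim), "tpr", "fpr"), col = "blue")
plot(performance(prediction(s_overlap, y_sim), "tpr", "fpr"), col = "red", add = TRUE)
segments(0, 0, 1, 1, col = "light blue")

The blue curve jumps to the top-left corner (the “perfect classifier”), while the red one stays strictly below it.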

Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:

  • The square root link (for the Poisson model)

Consider some random variable Y with mean \mu and variance \sigma^2. Using Taylor’s expansion, g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu), we can write \mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu) and \text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2.

Assume that Y\sim\mathcal{P}(\lambda), and consider the square root transformation g(y)=\sqrt{y}; then the second equality becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}.

So, somehow, with a square-root transformation, we get variance stabilization, which can be interpreted as some form of homoscedasticity.
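As a quick numerical check of that claim (not from the original post), one can simulate Poisson samples for a few values of \lambda and look at the empirical variance of \sqrt{Y}; the values of \lambda below are arbitrary,

set.seed(1)
lambda = c(2, 5, 10, 50, 100)
# empirical variance of sqrt(Y) with Y ~ Poisson(lambda) : should be close to 1/4
sapply(lambda, function(l) var(sqrt(rpois(1e6, l))))

For small \lambda the approximation is rather crude, but as \lambda increases the empirical variance gets close to 0.25.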

  • The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]. Thus, \mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])] while \mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta}), where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model looks like a Bernoulli regression with H as link function, \mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta})

So, assume now that instead of observing N, we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as running first a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us first compare e^{-\lambda_{\mathbf{x}}} and p_{\mathbf{x}}=\mathbb{P}[Y=0|\mathbf{X}=\mathbf{x}], obtained from a standard binary regression (a probit model, in the code below)

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if p_{\mathbf{x}} was obtained from a Bernoulli regression, with a cloglog link function ?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here ! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case, the two models are very different. But actually, so is the second comparison, based on the cloglog link

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that ? Could it be because the Poisson model is not good ? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)
 
Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.002274   0.048413  -0.047    0.963    
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 *** 
Zero-inflation model coefficients (binomial with logit link): 
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…

GLMs: link vs. distribution

Usually, when I give a course on GLMs, I try to insist on the fact that the link function is probably more important than the distribution. In order to illustrate, consider the following dataset, with 5 observations

x = c(1,2,3,4,5)
y = c(1,2,4,2,6)
base = data.frame(x,y)

Then consider several models, with various distributions, and either an identity link (in that case \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbf{x}^T\mathbf{\beta}) or a log link function (so that \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=e^{\mathbf{x}^T\mathbf{\beta}})

regNId = glm(y~x,family=gaussian(link="identity"),data=base)
regNlog = glm(y~x,family=gaussian(link="log"),data=base)
regPId = glm(y~x,family=poisson(link="identity"),data=base)
regPlog = glm(y~x,family=poisson(link="log"),data=base)
regGId = glm(y~x,family=Gamma(link="identity"),data=base)
regGlog = glm(y~x,family=Gamma(link="log"),data=base)
regIGId = glm(y~x,family=inverse.gaussian(link="identity"),data=base)
regIGlog = glm(y~x,family=inverse.gaussian(link="log"),data=base)

One can also consider some Tweedie distribution, to be even more general

library(statmod)
regTwId = glm(y~x,family=tweedie(var.power=1.5,link.power=1),data=base)
regTwlog = glm(y~x,family=tweedie(var.power=1.5,link.power=0),data=base)

Consider the prediction obtained in the first case, with the linear link function

library(RColorBrewer)
darkcols = brewer.pal(8, "Dark2")
plot(x,y,pch=19)
abline(regNId,col=darkcols[1])
abline(regPId,col=darkcols[2])
abline(regGId,col=darkcols[3])
abline(regIGId,col=darkcols[4])
abline(regTwId,lty=2)

The predictions are very very close, aren’t they ? In the case of the exponential prediction, we obtain

plot(x,y,pch=19)
u=seq(.8,5.2,by=.01)
lines(u,predict(regNlog,newdata=data.frame(x=u),type="response"),col=darkcols[1])
lines(u,predict(regPlog,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(regGlog,newdata=data.frame(x=u),type="response"),col=darkcols[3])
lines(u,predict(regIGlog,newdata=data.frame(x=u),type="response"),col=darkcols[4])
lines(u,predict(regTwlog,newdata=data.frame(x=u),type="response"),lty=2)

We can actually look closer. For instance, in the linear case, consider the slope obtained with a Tweedie model (that will include all the parametric families mentioned here, actually)

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base))$coefficients[2,1:2]
Vgamma = seq(-.5,3.5,by=.05)
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3,ylim=c(.965,1.03),xlab="power",ylab="slope")

The slope here is always very close to one! Even more so if we add a confidence interval

plot(Vgamma,Vpente[1,])
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

Heuristically, for the Gamma regression, or the Inverse Gaussian one, because the variance is a power of the prediction, if the prediction is small (here on the left), the variance should be small. So, on the left of the graph, the error should be small with a higher power for the variance function. And that’s indeed what we observe here

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(-.1,.04),xlab="power",ylab="error")
abline(h=0,lty=2)

Of course, we can do the same with the exponential models

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base))$coefficients[2,1:2]
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3)

or, if we add the confidence bands, we obtain

plot(Vgamma,Vpente[1,],ylim=c(0,.8),type="l",lwd=3,xlab="power",ylab="slope")
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

So here also, the “slope” is rather similar… And if we look at the error we make on the left part of the graph, we obtain

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(.001,.32),xlab="power",ylab="error")

So my point is that the distribution is usually not the most important part of GLMs, even if chapters of books on GLMs are distribution based… But as mentioned in another post, if you consider a nonlinear transformation, like we have with GAMs, the story is more complicated…

Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations, and nine variables : eight covariates, and the variable of interest, the number of extramarital affairs, over a year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion, and the occupation

df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that N\sim\mathcal{P}(\lambda\cdot E), where \lambda would be

sum(N)/sum(E)
[1] 0.6305506

The idea with the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j} where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two series of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first series of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j} and use the second series to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i's to compute the B_j's, and conversely, it is natural to consider an iterative procedure to solve the system. Observe that we do not have uniqueness…
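Before going through the computations step by step, here is a compact sketch of that iterative procedure, wrapped in a helper function (the name bailey and the wrapping are mine, not from the post) taking the two contingency tables N and E defined above as inputs; it uses the same updates and starting values as below,

# sketch of Bailey's margin method : alternate the two update equations
bailey = function(N, E, n_iter = 1000){
  A = rep(1, nrow(N))
  B = rep(1, ncol(N)) * sum(N) / sum(E)
  for(k in 1:n_iter){
    A = apply(N, 1, sum) / apply(t(B * t(E)), 1, sum)
    B = apply(N, 2, sum) / apply(A * E, 2, sum)
  }
  list(A = A, B = B, N_hat = E * A %*% t(B))
}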

Consider here some starting values for A_i's and B_j's

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j

E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.5222025  0.6305506  5.0444050  2.5222025 10.0888099  5.6749556  0.0000000
       2 14.5026643  1.8916519  6.9360568 10.7193606 35.3108348 22.6998224  3.7833037
       3 18.2859680  0.6305506  6.3055062  7.5666075 24.5914742 15.7637655  1.2611012
       4 23.9609236  4.4138544  7.5666075 13.2415631 37.2024867 27.7442274  1.2611012
       5  8.1971581  0.6305506  1.8916519  6.3055062 11.9804618 11.9804618  1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5 
 26.48313  95.84369  74.40497 115.39076  42.87744 
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7 
107  13  44  64 189 133  13

From the expressions above, observe that one can easily write the A_i's and the B_j's as functions of the B_j's and the A_i's respectively

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let us iterate one thousand times

for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5 
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763 
B
        1         2         3         4         5         6         7 
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750 
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

That is our prediction, per category, of the number of affairs. Observe that here, sums per row are equal to observed numbers,

apply(N,1,sum)
  1   2   3   4   5 
 41 103 108  73  30 
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5 
 41 103 108  73  30

as well as sums per column

apply(N,2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21 
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21

Now, why should I mention that here, in the section on the Poisson regression in our course ? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.32604    0.21325  -1.529 0.126285    
religion2   -0.38832    0.18791  -2.066 0.038783 *  
religion3   -0.03829    0.18585  -0.206 0.836771    
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083    
occupation3  0.59022    0.21349   2.765 0.005699 ** 
occupation4  0.43588    0.20603   2.116 0.034381 *  
occupation5  0.04265    0.17590   0.242 0.808399    
occupation6  0.50587    0.17360   2.914 0.003569 ** 
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1          2          3          4          5          6          7
          1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813  0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703  1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464  0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

To be more specific, up to a multiplicative constant, the series of coefficients are equal here, e.g. for the A_i's

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072 
A/A[1]
        1         2         3         4         5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for the B_j's

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7 
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477 
B/B[1]
        1         2         3         4         5         6         7 
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478

This will have major implications in non-life insurance models (for claims reserving).

Explanatory variable in an interval

Following a question asked this morning in class, here is a quick post explaining how to extract the lower and upper bounds when we have intervals, in R. Let us start by generating some data,

n=200
set.seed(123)
X=rnorm(n)
Y=2+X+rnorm(n,sd = .3)

Suppose now that we no longer observe the true variable X but only a class (we will create eight classes, each containing one eighth of the observations)

Q=quantile(x = X,(0:8)/8)
Q[1]=Q[1]-.00001
Xcut=cut(X,breaks = Q)
B=data.frame(Y=Y,X=Xcut)

For instance, for the first value, we have

as.character(Xcut[1])
[1] "(-0.626,-0.348]"

To extract information about these bounds, we can use the following small function, which returns the lower bound, the upper bound, and the midpoint of the interval

extraire = function(x){
  ax=as.character(x)
  lower1 = as.numeric( sub("\\((.+),.*", "\\1", ax) )
  lower2 = as.numeric( sub("\\[(.+),.*", "\\1", ax) )
  upper1 = as.numeric( sub("[^,]*,([^]]*)\\]", "\\1", ax) )
  upper2 = as.numeric( sub("[^,]*,([^]]*)\\)", "\\1", ax) )
  lower = c(lower1,lower2)
  lower=lower[!is.na(lower)]
  upper = c(upper1,upper2)
  upper=upper[!is.na(upper)]
  mid   = (lower+upper)/2
  return(c(lower=lower,mid=mid,upper=upper))
}

We can check it on our first observation

extraire(Xcut[1])
 lower    mid  upper 
-0.626 -0.487 -0.348

Just to see, we can create three additional variables in our dataset (with these three pieces of information)

B2=Vectorize(function(i) extraire(Xcut[i]))(1:n)
B$lower=B2[1,]
B$mid  =B2[2,]
B$upper=B2[3,]

and we can compare 4 regressions: (i) we regress on our 8 classes, i.e. our 8 factors, (ii) we regress on the lower bound of the interval, (iii) on the “middle” value of the interval, (iv) on the upper bound

regF=lm(Y~X,data=B)
regL=lm(Y~lower,data=B)
regM=lm(Y~mid,data=B)
regU=lm(Y~upper,data=B)

We can compare the predictions of our four models

plot(B$Y,predict(regF),ylim=c(0,4))
points(B$Y,predict(regM),col="red")
points(B$Y,predict(regU),col="blue")
points(B$Y,predict(regL),col="purple")
abline(a=0,b=1,lty=2)

To go further, we can also compare the AIC of our models,

AIC(regF)
[1] 204.5653
AIC(regM)
[1] 201.1201
AIC(regL)
[1] 266.5246
AIC(regU)
[1] 255.0687

While using the lower or upper bounds is not conclusive here, note that using the middle value of the interval gives slightly better results than using 8 factors.

Monte Carlo techniques to create counterfactuals

In the previous STT5100 course, last week, we have seen how to use Monte Carlo simulations. The idea is that in statistics we observe a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics, \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let’s get back to statistics (without covariates) to illustrate. We assume that the observations y_i are realizations of an underlying random variable Y_i, and that the Y_i are i.i.d. random variables with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is a real-valued number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation for both, and I have to agree, that’s not very clever. So now, \widehat{\theta} is a random variable, not a number.

There are two strategies here. In classical statistics, we use probability theory to derive properties of \widehat{\theta} (the random variable): at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that’s a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we can compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That’s what we did last Friday.
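As a small illustration of that idea (not from the original post), consider the sample mean of an exponential sample with unit rate: instead of deriving its moments analytically, we can simply generate many counterfactual samples and average; the sample size and number of replications below are arbitrary,

set.seed(1)
n = 20     # sample size (arbitrary)
k = 1e4    # number of counterfactual samples
theta_hat = replicate(k, mean(rexp(n, rate = 1)))
mean(theta_hat)    # should be close to E[mean] = 1
var(theta_hat)     # should be close to Var[mean] = 1/n = 0.05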

I did also mention briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality, for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")

Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and its cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test whether it is a Gaussian distribution, but whether it is a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals

hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
  Y=rnorm(n,178,6.5)
  hist(Y,col=rgb(1,0,0,.3))
  lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)

We can see on the left that it is hard to assess normality from the density (histogram, and also the kernel-based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve) and some counterfactual \widehat{F}^{(s)} obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.
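To be explicit about those two distances (the formulas below are the standard ones, they were not spelled out in the original post): writing \widehat{F}_n for the empirical cumulative distribution function and F_{\theta_0} for the distribution we test, the Kolmogorov-Smirnov statistic is D_n=\sup_x|\widehat{F}_n(x)-F_{\theta_0}(x)|, while the Cramér-von Mises statistic is based on an integrated squared difference, W_n^2=n\int[\widehat{F}_n(x)-F_{\theta_0}(x)]^2dF_{\theta_0}(x), in the spirit of the yellow area above.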

d=rep(NA,1e5)
for(s in 1:1e5){
d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw 100,000 counterfactual samples (1e5, in the code above), we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc., and compare it with the one observed on our sample, \widehat{d}. The proportion of samples where the test statistic exceeds the one observed

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)
 
	One-sample Kolmogorov-Smirnov test
 
data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided

I thought about all that a couple of days ago, since I got invited to a panel discussion on “coding”, and why “coding” helped me as a professor. And this is precisely why I like coding: in statistics, you either manipulate abstract objects, like random variables, or you actually use a few lines of code to create counterfactuals and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we talk about (which can be very useful when there is a lot of confusion about notation).