Tag Archives: STT5100

Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations and nine variables: eight covariates, and the variable of interest, the number of extramarital affairs over one year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion and to occupation

df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above give the exposure (the number of observations) and the number of extramarital affairs, as contingency tables. Without any covariate, one can assume that N\sim\mathcal{P}(\lambda\cdot E), where \lambda would be

sum(N)/sum(E)
[1] 0.6305506

The idea of the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j}, where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two sets of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first set of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j} and use the second set to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i's to compute the B_j's, and conversely, it is natural to use an iterative procedure. Observe that the solution is not unique: the A_i's and B_j's are only identified up to a multiplicative constant…

Consider here some starting values for the A_i's and B_j's

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j

E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.5222025  0.6305506  5.0444050  2.5222025 10.0888099  5.6749556  0.0000000
       2 14.5026643  1.8916519  6.9360568 10.7193606 35.3108348 22.6998224  3.7833037
       3 18.2859680  0.6305506  6.3055062  7.5666075 24.5914742 15.7637655  1.2611012
       4 23.9609236  4.4138544  7.5666075 13.2415631 37.2024867 27.7442274  1.2611012
       5  8.1971581  0.6305506  1.8916519  6.3055062 11.9804618 11.9804618  1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5 
 26.48313  95.84369  74.40497 115.39076  42.87744 
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7 
107  13  44  64 189 133  13

From the expressions above, observe that one can very easily write the A_i's and B_j's as functions of the B_j's and A_i's, respectively

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let us iterate one thousand times

for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5 
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763 
B
        1         2         3         4         5         6         7 
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750 
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

These are our predictions, per category, of the number of affairs. Observe that the row sums are equal to the observed ones,

apply(N,1,sum)
  1   2   3   4   5 
 41 103 108  73  30 
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5 
 41 103 108  73  30

as well as the column sums

apply(N,2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21 
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21
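
As noted above, the decomposition is not unique: rescaling the A_i's by some constant c and the B_j's by 1/c leaves all fitted values E_{i,j}\cdot A_i\cdot B_j unchanged. A quick numerical sketch, using the A and B obtained above,

c0=2
# rescaling A by a constant and B by its inverse gives the same fitted values
max(abs(E * A%*%t(B) - E * (c0*A)%*%t(B/c0)))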

Now, why should I mention this here, in the section on Poisson regression in our course? Because this is actually exactly what we get if we run a Poisson regression on those two covariates

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.32604    0.21325  -1.529 0.126285    
religion2   -0.38832    0.18791  -2.066 0.038783 *  
religion3   -0.03829    0.18585  -0.206 0.836771    
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083    
occupation3  0.59022    0.21349   2.765 0.005699 ** 
occupation4  0.43588    0.20603   2.116 0.034381 *  
occupation5  0.04265    0.17590   0.242 0.808399    
occupation6  0.50587    0.17360   2.914 0.003569 ** 
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1          2          3          4          5          6          7
          1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813  0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703  1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464  0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148
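
The two tables agree, up to the numerical precision of the iterative procedure; a quick sketch of a sanity check,

# largest absolute difference between the GLM predictions and Bailey's margin method
max(abs(xtabs(yp~df$religion+df$occupation) - E * A%*%t(B)))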

To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the A_i's

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072 
A/A[1]
        1         2         3         4         5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for the B_j's

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7 
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477 
B/B[1]
        1         2         3         4         5         6         7 
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478

This will have major implications in non-life insurance models (for claims reserving).
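
To give a flavour of that link, here is a minimal sketch on a small (hypothetical) 3×3 incremental run-off triangle, with accident-year and development-period factors: the Poisson fit completes the lower triangle, and it is a classical result that those fitted values coincide with the chain-ladder projections,

# hypothetical incremental run-off triangle (rows: accident years, columns: development periods)
Tri = matrix(c(100, 50, 25,
               110, 60, NA,
               120, NA, NA), nrow=3, byrow=TRUE)
dtri = data.frame(y   = as.vector(Tri),
                  ay  = as.factor(rep(1:3, times=3)),   # accident year
                  dev = as.factor(rep(1:3, each=3)))    # development period
regtri = glm(y~ay+dev, data=dtri, family=poisson)
# fitted (completed) triangle
matrix(predict(regtri, newdata=dtri, type="response"), 3, 3)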

Explanatory variable given as an interval

Following a question asked this morning in class, here is a quick post to explain how to extract the lower and upper bounds when a variable is given as an interval, in R. Let us start by generating some data,

n=200
set.seed(123)
X=rnorm(n)
Y=2+X+rnorm(n,sd = .3)

Suppose now that we no longer observe the true variable X, but only a class (we create eight classes, each containing one eighth of the observations)

Q=quantile(x = X,(0:8)/8)
Q[1]=Q[1]-.00001
Xcut=cut(X,breaks = Q)
B=data.frame(Y=Y,X=Xcut)

For instance, for the first value, we have

as.character(Xcut[1])
[1] "(-0.626,-0.348]"

To extract information about these bounds, one can use the following short function, which returns the lower bound, the upper bound, and the midpoint of the interval

extraire = function(x){
  ax=as.character(x)
  lower1 = as.numeric( sub("\\((.+),.*", "\\1", ax) )
  lower2 = as.numeric( sub("\\[(.+),.*", "\\1", ax) )
  upper1 = as.numeric( sub("[^,]*,([^]]*)\\]", "\\1", ax) )
  upper2 = as.numeric( sub("[^,]*,([^]]*)\\)", "\\1", ax) )
  lower = c(lower1,lower2)
  lower=lower[!is.na(lower)]
  upper = c(upper1,upper2)
  upper=upper[!is.na(upper)]
  mid   = (lower+upper)/2
  return(c(lower=lower,mid=mid,upper=upper))
}

We can check it on our first observation

extraire(Xcut[1])
 lower    mid  upper 
-0.626 -0.487 -0.348

Just to see, we can add three extra variables to our dataset (with these three pieces of information)

B2=Vectorize(function(i) extraire(Xcut[i]))(1:n)
B$lower=B2[1,]
B$mid  =B2[2,]
B$upper=B2[3,]

and we can compare four regressions: (i) regressing on our 8 classes, i.e. our 8 factors, (ii) regressing on the lower bound of the interval, (iii) on the “middle” value of the interval, and (iv) on the upper bound

regF=lm(Y~X,data=B)
regL=lm(Y~lower,data=B)
regM=lm(Y~mid,data=B)
regU=lm(Y~upper,data=B)

We can compare the predictions of our four models

plot(B$Y,predict(regF),ylim=c(0,4))
points(B$Y,predict(regM),col="red")
points(B$Y,predict(regU),col="blue")
points(B$Y,predict(regL),col="purple")
abline(a=0,b=1,lty=2)

To go further, we can also compare the AICs of our models,

AIC(regF)
[1] 204.5653
AIC(regM)
[1] 201.1201
AIC(regL)
[1] 266.5246
AIC(regU)
[1] 255.0687

While using the lower or upper bounds is not conclusive here, note that using the midpoint of the interval gives slightly better results than using the 8 factors.

Monte Carlo techniques to create counterfactuals

In the previous STT5100 course, last week, we saw how to use Monte Carlo simulations. The idea is that, in statistics, we observe a sample \{y_1,\cdots,y_n\} and, more generally in econometrics, \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let us get back to statistics (without covariates) to illustrate. We assume that the observations y_i are realizations of underlying random variables Y_i, and that the Y_i are i.i.d. random variables with (unknown) distribution F_{\theta}. Consider some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n) – so \widehat{\theta} is a real-valued number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, such as a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that \widehat{\theta} is now a real-valued random variable. What is puzzling for students is that we use the same notation for both, and I have to agree that this is not very clever.

There are two strategies here. In classical statistics, we use probability theory to derive properties of \widehat{\theta} (the random variable): at least the first two moments and, if possible, the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that is a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one, \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one, \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we simply compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That is what we saw last Friday.
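
For instance, here is a minimal sketch with a (hypothetical) toy estimator – the maximum of n0 draws from a uniform distribution on [0,1] – comparing the Monte Carlo average of the \widehat{\theta}^{(s)}'s with the known expectation n0/(n0+1),

n0=20
k=1e4
theta=replicate(k, max(runif(n0)))  # k counterfactual estimates
mean(theta)                         # Monte Carlo approximation of E(theta_hat)
n0/(n0+1)                           # theoretical value, for comparison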

I also mentioned briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")

Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and its cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test whether it is a Gaussian distribution, but whether it is a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals

hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
  Y=rnorm(n,178,6.5)
  hist(Y,col=rgb(1,0,0,.3))
  lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)

We can see on the left that it is hard to assess normality from the density (the histogram, and also the kernel-based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve) with some counterfactual \widehat{F}^{(s)} obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\}, generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.
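
The Kolmogorov-Smirnov distance is used below; for the yellow area, here is a minimal sketch (assuming the X and Y generated above), integrating the absolute difference between the two empirical cdfs,

area = function(x,y){
  g  = sort(c(x,y))
  Fx = ecdf(x)
  Fy = ecdf(y)
  sum(abs(Fx(g)-Fy(g))[-length(g)]*diff(g))  # area between the two step functions
}
area(X,Y)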

d=rep(NA,1e5)
for(s in 1:1e5){
d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw 100,000 counterfactual samples (as in the loop above), we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc., and compare it with the one observed on our sample, \widehat{d}. The proportion of samples where the test statistic exceeds the one observed

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)
 
	One-sample Kolmogorov-Smirnov test
 
data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided

I thought about all that a couple of days ago, since I was invited to a panel discussion on “coding”, and on why “coding” helped me as a professor. And this is precisely why I like coding: in statistics, either you manipulate abstract objects, like random variables, or you actually use a few lines of code to create counterfactuals and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we are talking about (which can be very useful when there is a lot of confusion about notation).

Handling missing values: replacing the NAs with a constant?

A quick post to answer a question asked at the end of this morning's class (STT5100) by Jean-Pierre Liégeois, a young reader from the Var (to preserve some anonymity)

During my internship, when we had missing values, I was told to replace them with -1, and then to add an indicator variable saying that the variable equals -1. That way, we drop neither variables nor observations. Is that a valid approach?

To formalize this a little: we simulate some data here, say x_1 and x_2, and then generate responses from a model of the form y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon. A proportion \alpha of the x_1 values will be turned into NAs. What Jean-Pierre suggests is to replace the missing values with -1, and then to fit the model y=\beta_0+\beta_1x_1+\beta_{-1}\mathbf{1}(x_1=-1)+\beta_2x_2+\varepsilon. On the code side, this is quite simple. By default, R's strategy is to drop the missing values. If 50% of the x_1 values are missing, half of the rows are dropped

n=1000
x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.5
indice=sample(1:n,size=round(n*alpha))
base=data.frame(y=y,x1=x1,x2=x2)
base$x1[indice]=NA
reg=lm(y~x1+x2,data=base)

Instead of generating a single sample, we simulate 10,000 of them, and look at the distribution of \widehat{\beta}_1,

m=10000
B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.5
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=NA
  reg=lm(y~x1+x2,data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white",xlab="missing values = 50%")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

Of course, with a smaller proportion of missing values – say \alpha=5\% – we lose fewer observations, and the estimator therefore has a smaller variance.

Let us now try the strategy that consists in replacing the missing values with a fixed numerical value, and adding an indicator variable,

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.5
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

Which, admittedly, does not change much… including when the proportion of missing values drops to 5%.

One may wonder what happens if the shift is no longer 1 but 10 (a priori, the choice is arbitrary, since x_1 can be more or less dispersed… -1 for a variable between 0 and 1, or between 0 and 1000, is not quite the same thing). But no, for instance with still 5% of missing values, we get the same kind of picture.
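
A sketch of the loop behind that comparison (only the replacement value and the indicator change from the code above),

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.05
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-10
  reg=lm(y~x1+x2+I(x1==(-10)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")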

If we look at our sample, in particular the scatterplot of (x_1,y), we observe the following:

here, the missing values are chosen at random, completely independently of everything else,

x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.3333333
indice=sample(1:n,size=round(n*alpha))
clr=rep("black",n)
clr[indice]="red"
plot(x1,y,col=clr)

(here with one third of the values missing, in red). But one could instead assume that the missing values correspond to the largest values of x_1, for instance,

x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.3333333
indice=sample(1:n,size=round(n*alpha),prob = x1^3)
clr=rep("black",n)
clr[indice]="red"
plot(x1,y,col=clr)

One may wonder what this would give for the estimator \widehat{\beta}_1.
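
A sketch of the corresponding simulation loop (one third of the values missing, drawn with probability proportional to x_1^3, and the same -1 plus indicator strategy),

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.3333333
  indice=sample(1:n,size=round(n*alpha),prob=x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")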

It does not change much, but the variance is larger, if we look closely. One last try: what happens if the variables x_1 and x_2 are now correlated,

B=rep(NA,m)
library(mnormt)
r=.8
S = matrix(c(1,r,r,1),2,2)
for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

This time, the estimator is biased (by about 10% in this numerical example). Clearly, this technique is not very convincing…
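
A rough check of that bias on the simulated estimates (a quick sketch, using the vector B from the loop above),

mean(B)        # Monte Carlo average of the estimates of beta_1 (true value is 2)
(mean(B)-2)/2  # relative bias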

I should add that this method is not equivalent to the first one, even if the distributions of the estimators are close

set.seed(1)
x=rmnorm(n,varcov = S)
x1=pnorm(x[,1])
x2=pnorm(x[,2])
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.2
indice=sample(1:n,size=round(n*alpha),prob = x1^3)
base=data.frame(y=y,x1=x1,x2=x2)
base$x1[indice]=-1
reg1=lm(y~x1+x2+I(x1==(-1)),data=base)
coefficients(reg1)
      (Intercept)                x1                x2 I(x1 == (-1))TRUE 
        1.0988005         1.7454385        -0.5149477         3.1000668 
base$x1[indice]=NA
reg2=lm(y~x1+x2,data=base)
coefficients(reg2)
(Intercept)          x1          x2 
  1.1123953   1.8612882  -0.6548206

As I said (during the discussion that followed the class), a more promising method is imputation. The idea is to predict a value for the missing x_1's. One might be tempted to use \overline{x}_1, the average of the observed x_1's. But we know that the missing values are precisely the large values of x_1 here, so we should be able to do better! We also know that x_1 and x_2 are correlated here. Positively correlated, moreover. In other words, if x_2 is large, the (unobserved) x_1 must have been large too. The simplest approach is to fit a linear model, x_1=\alpha_0+\alpha_2x_2+\eta_i, calibrated on the non-missing values, and then to use \widehat{x}_1=\widehat{\alpha}_0+\widehat{\alpha}_2x_2 for the missing values. It is simplistic, but why not? We then estimate the regression model on this new dataset.

for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
    base$x1[indice]=NA
    reg0=lm(x1~x2,data=base[-indice,])
    base$x1[indice]=predict(reg0,newdata=base[indice,])
  reg=lm(y~x1+x2,data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

On this numerical example, we obtain

base$x1[indice]=NA
reg0=lm(x1~x2,data=base[-indice,])
base$x1[indice]=predict(reg0,newdata=base[indice,])
reg3=lm(y~x1+x2,data=base)
coefficients(reg3)
(Intercept)          x1          x2 
  1.1593298   1.8612882  -0.6320339

This method has at least been able to correct some of the bias…

That said, if we look carefully, the coefficient of x_1 is exactly the same as with the first method, which consists in dropping the rows with missing values!

summary(reg3)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.15933    0.06649  17.435  < 2e-16 ***
x1           1.86129    0.21967   8.473  < 2e-16 ***
x2          -0.63203    0.20148  -3.137  0.00176 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.051 on 997 degrees of freedom
Multiple R-squared:  0.1094,	Adjusted R-squared:  0.1076 
F-statistic: 61.23 on 2 and 997 DF,  p-value: < 2.2e-16 
 
summary(reg2) 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.11240    0.06878  16.173  < 2e-16 ***
x1           1.86129    0.21666   8.591  < 2e-16 ***
x2          -0.65482    0.20820  -3.145  0.00172 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.037 on 797 degrees of freedom
  (200 observations deleted due to missingness)
Multiple R-squared:  0.1223,	Adjusted R-squared:   0.12 
F-statistic:  55.5 on 2 and 797 DF,  p-value: < 2.2e-16

Instead of fitting a linear regression, we can use another imputation method: for instance, take the average of the k (observed) values of x_1 for the k individuals whose x_2 is closest to the x_2 of the individual whose x_1 is missing:

kpp=function(i,basena,k=5){
  x2=basena$x2[i]
  sb=basena[!is.na(basena$x1),]
  idx=rank(abs(sb$x2-x2))
  mean(sb[which(idx<=k),"x1"])
}

On our simulated dataset, we obtain

base$x1[indice]=NA
base0=base
for(j in indice) base0$x1[j]=kpp(j,base0,k=5)
reg4=lm(y~x1+x2,data=base0)
coefficients(reg4)
(Intercept)          x1          x2 
   1.197944    1.804220   -0.806766

If we look at what this gives over our 10,000 simulations, we get the following (it is a bit slow, as I coded this very quickly, and not at all in an optimal way)

for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=NA
  base0=base
    for(j in indice) base0$x1[j]=kpp(j,base0,k=5)
  reg=lm(y~x1+x2,data=base0)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

The bias seems smaller here than without imputation… in other words, imputation methods seem more robust to me than the strategy of replacing the NAs with an arbitrary value and adding an indicator variable to the regression.