Variance of the slope in a regression model

In my “applied linear models” exam, there was a tricky question (it was multiple choice, so no details were required). It simply asked whether the following statement was valid or not:

Consider a linear regression with a single covariate, y=\beta_0+\beta_1x_1+\varepsilon, and the least-squares estimator. The variance of the slope is \text{Var}[\widehat{\beta}_1]. Do we decrease this variance if we add one variable, and consider y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon?

For the exam, the expected answer was simply “no”. In a nutshell, there are two cases where we should expect different changes (the classical formula below makes this explicit),

  • if x_1 and x_2 are highly correlated, then we should expect the variance to increase
  • if x_1 and x_2 are not correlated, then we should expect the variance to decrease
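This is what the classical formula for the two-covariate model says: writing r_{12} for the sample correlation between x_1 and x_2,

\text{Var}[\widehat{\beta}_1]=\frac{\sigma^2}{(1-r_{12}^2)\sum_{i=1}^n(x_{1,i}-\bar{x}_1)^2}

The term (1-r_{12}^2)^{-1} is the variance inflation factor: it blows up when the covariates are strongly correlated. Dropping x_2 removes that factor, but if x_2 truly belongs to the model, its effect moves into the error term and inflates \sigma^2.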

We did briefly observe (and discuss) those points on examples during the lecture… but I wanted to go a bit further, since I couldn’t find any analytical results. Let us generate a model y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon, and then compare the variance \text{Var}[\widehat{\beta}_1] in the two fitted models, depending on the correlation between x_1 and x_2.

library(mnormt)
n=200
s=function(r=0){
  # draw (x1,x2) from a bivariate Gaussian with correlation r
  S=matrix(c(1,r,r,1),2,2)
  X=rmnorm(n,c(0,0),S)
  # generate y from the full model, with noise standard deviation 1/2
  B=data.frame(y=-2+X[,1]+X[,2]+rnorm(n)/2,
               x1=X[,1],
               x2=X[,2])
  reg12=lm(y~x1+x2,data=B)
  reg1=lm(y~x1,data=B)
  # [2,2] is the standard error of the slope of x1
  summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
}
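Each call returns one simulated ratio; for instance s(.9) simulates one sample with strongly correlated covariates and returns the corresponding ratio of standard errors.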

Let us generate 500 samples for each value of the correlation, from -0.9 to +0.9

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))
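Equivalently (this is just a stylistic variant, not the code used above), the same matrix can be built with replicate and sapply,

# one column of 500 simulated ratios per value of the correlation
M=sapply(((-9):9)/10,function(r) replicate(500,s(r)))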

and let us plot those ratios. Note that summary(reg)$coefficients[2,2] is the standard error of \widehat{\beta}_1, not its variance, so what we compare is actually the ratio of the two standard errors (the square root of the variance ratio, which exceeds 1 exactly when the variance ratio does).

# empty frame, then one cloud of 500 simulated ratios per correlation
plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white",
     xlab="correlation",ylab="ratio of standard errors")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
# average ratio, as a function of the correlation
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

If the ratio exceeds 1, the variance increases when adding a covariate.

Indeed, here, when the two covariates are independent, the standard error is divided by (roughly) two, i.e. the variance is divided by about five. But when the covariates are highly correlated, the standard error is (almost) multiplied by two, i.e. the variance roughly triples…
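Actually, those orders of magnitude can be recovered with a back-of-the-envelope calculation. Under the design above (unit variances, correlation r, \beta_1=\beta_2=1 and \sigma^2=1/4), using population moments,

\text{Var}[\widehat{\beta}_1^{(12)}]\approx\frac{\sigma^2}{n(1-r^2)} \quad\text{and}\quad \text{Var}[\widehat{\beta}_1^{(1)}]\approx\frac{\sigma^2+(1-r^2)}{n}

(superscripts indicating which model is fitted), since the simple regression sends \beta_2(x_2-rx_1) into its error term. The variance ratio is then \sigma^2/[(1-r^2)(\sigma^2+1-r^2)], i.e. 1/5 at r=0 and about 3 at r=\pm0.9, consistent with standard-error ratios around 0.45 and 1.7 in the simulations. This approximate theoretical curve can be overlaid on the previous plot,

# approximate standard-error ratio under this design
curve(sqrt(.25/((1-x^2)*(1.25-x^2))),add=TRUE,col="blue",lwd=2)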

Now, what if, actually, x_2 is not a real explanatory variable? Suppose that the true model we generate from is y=\beta_0+\beta_1x_1+\varepsilon. In that case,

s=function(r=0){
  S=matrix(c(1,r,r,1),2,2)
  X=rmnorm(n,c(0,0),S)
  # same design as before, but y no longer depends on x2
  B=data.frame(y=-2+X[,1]+rnorm(n)/2,
               x1=X[,1],
               x2=X[,2])
  reg12=lm(y~x1+x2,data=B)
  reg1=lm(y~x1,data=B)
  summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
}

we generate our samples as previously

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and we plot those ratios

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white",
     xlab="correlation",ylab="ratio of standard errors")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

When we add a useless variable x_2, whatever its correlation with x_1, it will always, on average, increase the variance of \widehat{\beta}_1.
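The calculation is even simpler here: since \beta_2=0, the simple regression is correctly specified, and with population moments

\text{Var}[\widehat{\beta}_1^{(1)}]\approx\frac{\sigma^2}{n} \quad\text{and}\quad \text{Var}[\widehat{\beta}_1^{(12)}]\approx\frac{\sigma^2}{n(1-r^2)}

so the variance ratio is (1-r^2)^{-1}\geq 1, with equality only when r=0. A useless covariate can never reduce the variance, and it hurts more the more correlated it is with x_1. Again, the approximate theoretical standard-error ratio can be added to the plot,

# approximate standard-error ratio when x2 is useless
curve(sqrt(1/(1-x^2)),add=TRUE,col="blue",lwd=2)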


One thought on “Variance of the slope in a regression model”

  1. This post kind of buried the lede in the last sentence. To summarize for other readers: Adding a “useless” predictor *or* one highly correlated with an existing predictor will increase the variance of a calculated model coefficient.

    Overall this seems intuitive: we’d expect that adding a useless predictor should make the model crappier. Or at least that’s the way I see it.

    It would be interesting to know if this were always (on average) true for all models and all coefficients.
