Regression on factors

Most of our intuitions about regression models come from the Gaussian standard linear model. One interesting feature is that, when we have a factor explanatory variable, the sum of the predictions per class equals the sum of the observations of the endogenous variable per class. To be more specific, consider some factor variable $x_1\in\{0,1\}$, and the regression model

$$y_i=\beta_0+\beta_1\,\boldsymbol{1}(x_{1,i}=1)+\beta_2\,x_{2,i}+\varepsilon_i$$

If we fit that model by ordinary least squares, we obtain predictions

$$\widehat{y}_i=\widehat{\beta}_0+\widehat{\beta}_1\,\boldsymbol{1}(x_{1,i}=1)+\widehat{\beta}_2\,x_{2,i}$$

Then, for all $x\in\{0,1\}$,

$$\sum_{i:x_{1,i}=x} y_i = \sum_{i:x_{1,i}=x} \widehat{y}_i$$
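This equality is a direct consequence of the first-order (normal) equations of least squares: the residuals are orthogonal to every column of the design matrix $\mathbf{X}$, and the indicator $\boldsymbol{1}(x_{1,i}=1)$ is one of those columns,

$$\mathbf{X}^\top(\mathbf{y}-\widehat{\mathbf{y}})=\mathbf{0}\quad\Longrightarrow\quad\sum_{i:x_{1,i}=1}(y_i-\widehat{y}_i)=0\quad\text{and}\quad\sum_{i=1}^n(y_i-\widehat{y}_i)=0,$$

the second identity coming from the intercept column. Subtracting the first sum from the second gives the property on the class $\{i:x_{1,i}=0\}$ as well. We can check it numerically; note that the code below compares class means, which is equivalent here since the two classes have the same size,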

> n=200
> X1=rep(0:1,each=n/2)
> set.seed(1)
> X2=runif(n)
> L=X1-X2
> B=data.frame(Y=rnorm(n,L),X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] -0.4881735  0.5341301
> fit=lm(Y~X1+X2,data=B)
> B2=data.frame(x=B$X1,y=predict(fit))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] -0.4881735  0.5341301
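
To connect with the normal equations above, we can also sum the residuals within each class; both sums should be zero, up to numerical precision (a minimal check, reusing the B and fit objects just created),

> res=B$Y-predict(fit)
> aggregate(x=res,by=list(B$X1),sum)$x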

Actually, this result holds whatever distribution the data are drawn from, since only the least-squares fitting matters. For instance, consider some Poisson-based observations,

> L=exp(X1-X2)
> B=data.frame(Y=rpois(n,L),X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] 0.68 1.80
> fit=lm(Y~X1+X2,data=B)
> B2=data.frame(x=B$X1,y=predict(fit))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.68 1.80

This result is still valid here. But this is not always the case with Generalized Linear Models; it depends on the link function. The property does still hold for the logistic regression,

> B=data.frame(Y=(rpois(n,L)>1)*1,X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] 0.13 0.38
> fit=glm(Y~X1+X2,data=B,family=binomial(link="logit"))
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.13 0.38

But as soon as we move away from the logit link, which is the canonical link for the binomial family, this result is no longer valid. Consider for instance the probit regression,

> fit=glm(Y~X1+X2,family=binomial(link="probit"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1301143 0.3799545

or a cloglog link function

> fit=glm(Y~X1+X2,family=binomial(link="cloglog"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1299614 0.3800653

or a log link function

> fit=glm(Y~X1+X2,family=binomial(link="log"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1299618 0.3800345

Actually, we can try any kind of power link function (in R, power(lambda) corresponds to the link $g(\mu)=\mu^\lambda$),

> fit=glm(Y~X1+X2,family=binomial(link=power(.5)),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1301992 0.3800619

> fit=glm(Y~X1+X2,family=binomial(link=power(.8)),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1306485 0.3798376

And even though a similar property does hold for the Poisson regression, the log link being the canonical link for the Poisson family,

> fit=glm(Y~X1+X2,family=poisson(link="log"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.13 0.38

in general, for any $x\in\{0,1\}$,

$$\sum_{i:x_{1,i}=x} y_i \neq \sum_{i:x_{1,i}=x} \widehat{y}_i$$

The equality is a feature of canonical links: with the canonical link, the likelihood equations are $\mathbf{X}^\top(\mathbf{y}-\widehat{\boldsymbol{\mu}})=\mathbf{0}$, exactly like the normal equations of least squares, so the response residuals sum to zero within each class. With a non-canonical link, the likelihood equations weight the residuals by $1/\big(V(\widehat{\mu}_i)\,g'(\widehat{\mu}_i)\big)$, and the identity breaks down.
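
To make the contrast explicit, here is a minimal sketch (reusing the binary data in B simulated above, with fit1 and fit2 as throwaway names) comparing the per-class sums of response residuals under the canonical logit link and the non-canonical probit link; the former should be numerically zero, the latter small but non-zero,

> fit1=glm(Y~X1+X2,data=B,family=binomial(link="logit"))
> aggregate(x=B$Y-predict(fit1,type="response"),by=list(B$X1),sum)$x
> fit2=glm(Y~X1+X2,data=B,family=binomial(link="probit"))
> aggregate(x=B$Y-predict(fit2,type="response"),by=list(B$X1),sum)$x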

