Most of our intuitions about regression models come from the Gaussian standard linear model. One interesting feature is that, when we have a factor explanatory variable, the sum of predictions per class equals the sum of observations of the endogenous variable, per class. To be more specific, consider some factor variable $x_1$, taking values in $\{0,1\}$ say, and a regression model
$$y_i=\beta_0+\beta_1 x_{1,i}+\beta_2 x_{2,i}+\varepsilon_i$$
Use ordinary least squares to fit that model. Then, for all classes $a$,
$$\sum_{i:x_{1,i}=a}\widehat{y}_i=\sum_{i:x_{1,i}=a}y_i,\qquad\text{or equivalently}\qquad\frac{1}{n_a}\sum_{i:x_{1,i}=a}\widehat{y}_i=\frac{1}{n_a}\sum_{i:x_{1,i}=a}y_i$$
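This is a direct consequence of the first-order conditions of least squares: at the optimum, the residuals are orthogonal to every column of the design matrix $\mathbf{X}$,
$$\mathbf{X}^\top(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}})=\mathbf{0}$$
and the indicator of each class is a linear combination of columns of $\mathbf{X}$ (here, the intercept and the $x_1$ dummy), so the residuals sum to zero within each class. Let us check this on simulated data (note that X2 has length 2*n while Y and X1 have length n, so R's recycling rules make B a data frame with 2*n rows; since the property holds whatever the data, this does not matter here),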
> n=200
> X1=rep(0:1,each=n/2)
> set.seed(1)
> X2=runif(2*n)
> L=X1-X2
> B=data.frame(Y=rnorm(n,L),X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] -0.4881735 0.5341301
> fit=lm(Y~X1+X2,data=B)
> B2=data.frame(x=B$X1,y=predict(fit))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] -0.4881735 0.5341301
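Equivalently, the residuals sum to zero within each class; a quick check on the fitted model above,
# within-class sums of the OLS residuals:
# both values are exactly zero, up to floating-point error
tapply(residuals(fit), B$X1, sum)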
Actually, this result is still valid wherever the data come from. For instance, if we consider some Poisson-based observations,
> L=exp(X1-X2)
> B=data.frame(Y=rpois(n,L),X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] 0.68 1.80
> fit=lm(Y~X1+X2,data=B)
> B2=data.frame(x=B$X1,y=predict(fit))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.68 1.80
This result is still valid here. But this is not always the case with Generalized Linear Models. It is still valid with a logistic regression,
> B=data.frame(Y=(rpois(n,L)>1)*1,X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] 0.13 0.38
> fit=glm(Y~X1+X2,data=B,family=binomial(link="logit"))
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.13 0.38
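The reason is that the logit link is the canonical link of the binomial family: at the maximum of the likelihood, the response residuals are orthogonal to every column of the design matrix, including the dummy variable coding X1. A minimal numerical check, reusing the logistic fit above,
# score equations with the canonical link: t(X) %*% (y - p) = 0 at the MLE
# all three entries (intercept, X1 dummy, X2) are zero up to numerical precision
X = model.matrix(fit)
t(X) %*% (B$Y - predict(fit, type = "response"))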
But if we move away from the logistic regression, this result is no longer valid. Take the probit regression, for instance,
> fit=glm(Y~X1+X2,family=binomial(link="probit"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1301143 0.3799545
or a cloglog link function
> fit=glm(Y~X1+X2,family=binomial(link="cloglog"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1299614 0.3800653
or a log link function
> fit=glm(Y~X1+X2,family=binomial(link="log"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1299618 0.3800345
Actually, we can try any kind of power link function
> fit=glm(Y~X1+X2,family=binomial(link=power(.5)),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1301992 0.3800619
> fit=glm(Y~X1+X2,family=binomial(link=power(.8)),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.1306485 0.3798376
A similar property holds with the Poisson regression, since the log link is the canonical link of the Poisson family,
> fit=glm(Y~X1+X2,family=poisson(link="log"),data=B)
> B2=data.frame(x=B$X1,y=predict(fit,type="response"))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] 0.13 0.38
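With a non-canonical link for the same family, the property would break again; a quick sketch (the name fit2 and the choice of the sqrt link are only for illustration),
# a non-canonical link for the Poisson family: the equality is lost
fit2 = glm(Y~X1+X2, family = poisson(link = "sqrt"), data = B)
aggregate(x = predict(fit2, type = "response"), by = list(B$X1), mean)$x
# the two averages are now only close to 0.13 and 0.38, no longer equal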
In general, for any non-canonical link function, this equality no longer holds.
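The underlying reason is the same as in the least-squares case: with the canonical link, the score equations of the GLM read
$$\mathbf{X}^\top(\mathbf{y}-\widehat{\boldsymbol{\mu}})=\mathbf{0}$$
so the column of $\mathbf{X}$ coding each class forces $\sum_{i:x_{1,i}=a}(y_i-\widehat{\mu}_i)=0$. With a non-canonical link $g$, the score equations involve weights depending on $g'$ and on the variance function, and the residuals no longer cancel class by class.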