Last week in the non-life insurance course, we saw the theory of Generalized Linear Models, emphasizing the two important components:
- the link function (which is actually the key component in predictive modeling)
- the distribution, or the variance function
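In R, both components are specified through the family argument of glm(): the family fixes the distribution (and therefore the variance function), while its link argument fixes how the conditional mean depends on the covariates. A minimal sketch, on the dataset used throughout this post (the same models are fitted and discussed one at a time below):
# the family carries the distribution (hence the variance function),
# the link carries the shape of the conditional mean
glm(dist~speed, data=cars, family=gaussian(link="identity"))
glm(dist~speed, data=cars, family=poisson(link="log"))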
Just to illustrate, consider my favorite dataset
lin.mod = lm(dist~speed,data=cars)
A linear model means here Y_i=\beta_0+\beta_1X_i+\varepsilon_i
where the residuals are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like this
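A minimal sketch to draw that kind of picture: a scatterplot of the cars data, the fitted line, and a constant-width 90% band based on the estimated residual standard deviation (the band is an addition here, just to make the homoscedasticity assumption visible):
# scatterplot with the fitted least-squares line
plot(dist~speed, data=cars, pch=19, col="red")
abline(lin.mod, lwd=2)
# constant-width 90% band: the same residual standard deviation everywhere
sd.eps = summary(lin.mod)$sigma
u = seq(min(cars$speed), max(cars$speed), length=101)
yhat = predict(lin.mod, newdata=data.frame(speed=u))
lines(u, yhat + qnorm(.95)*sd.eps, lty=2)
lines(u, yhat - qnorm(.95)*sd.eps, lty=2)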
The idea here (in GLMs) is to assume Y\vert X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2)
which will produce the same model as the one described previously, based on some error term. That model can be visualized below,
# draw an empty horizontal plane, keeping the perspective matrix for trans3d()
attach(cars)
n=2
X=cars$speed
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype="detailed",box=FALSE)
# fit the Gaussian GLM with identity link (i.e. the linear model)
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))
# draw the fitted regression line on the floor of the plot
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
# add the 5% and 95% quantile curves, and fill the band between them
sdgig=sqrt(summary(reggig)$dispersion)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"),sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"),sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
# add the observations
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
# for a few values of the covariate, draw the conditional (normal) density of Y
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
  stp=251
  x=rep(vX[j],stp)
  y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"),sdgig)),max(Y)+15,length=stp)
  z0=rep(0,stp)
  z=dnorm(y,mgig[j],sdgig)
  C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
  polygon(C,border=NA,col="light blue",density=40)
  C=trans3d(x,y,z0,mat)
  lines(C,lty=2)
  C=trans3d(x,y,z,mat)
  lines(C,col="blue")
}
We have two components here: the linear increase of the average, \mathbb{E}(Y\vert X=x)=\beta_0+\beta_1x, and the constant variance of the normal distribution, \text{Var}(Y\vert X=x)=\sigma^2.
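As a quick side check (not in the original code), the Gaussian GLM with the identity link, reggig, fitted in the code above, returns exactly the least-squares coefficients, and its estimated dispersion is the residual variance of lin.mod:
# same coefficients for lm() and glm(..., family=gaussian(link="identity"))
cbind(coef(lin.mod), coef(reggig))
# the estimated dispersion of the Gaussian GLM is the residual variance of the lm
c(summary(lin.mod)$sigma^2, summary(reggig)$dispersion)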
On the other hand, if we assume a Poisson regression,
poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))
we have something like
This time, two things have changed simultaneously: the model is no longer linear but exponential, \mathbb{E}(Y\vert X=x)=e^{\beta_0+\beta_1x}, and the variance also increases with the explanatory variable, \text{Var}(Y\vert X=x)=e^{\beta_0+\beta_1x}, since with a Poisson regression,
Y\vert X=x\sim\mathcal{P}(e^{\beta_0+\beta_1x})
If we adapt the previous code (replacing the Gaussian quantiles and densities with their Poisson counterparts), we get
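As a flat, two-dimensional sketch of the same two effects (a hedged illustration, not the three-dimensional figure itself): the fitted mean is exponential, and a 90% Poisson band around it widens with speed,
# exponential fitted mean and a 90% Poisson band,
# whose width increases with the fitted mean (heteroscedasticity)
plot(dist~speed, data=cars, pch=19, col="red")
u = seq(min(cars$speed), max(cars$speed), length=101)
lambda = predict(poisson.reg, newdata=data.frame(speed=u), type="response")
lines(u, lambda, lwd=2)
lines(u, qpois(.95, lambda), lty=2)
lines(u, qpois(.05, lambda), lty=2)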
The problem is that we changed two things at once when moving from the linear model to the Poisson regression. So let us look at what happens when we change the two components one at a time. First, we can change the link function: keep the Gaussian distribution, but use a multiplicative model (with a logarithm link function)
gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))
which is still, here, a homoscedastic model, but this time a non-linear one. Or we can change the link function in the Poisson regression, to get a model that is linear, but heteroscedastic
poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))
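To compare the four combinations at a glance, here is a small sketch (not in the original post) overlaying the four fitted mean curves on the data: two are linear (identity link), two are exponential (log link), and within each pair the assumed variance structure, hence the weighting of the observations, differs.
# overlay the fitted means of the four models
plot(dist~speed, data=cars, pch=19, col="red")
u = seq(min(cars$speed), max(cars$speed), length=101)
nd = data.frame(speed=u)
lines(u, predict(lin.mod, newdata=nd), lwd=2)                                      # Gaussian, identity link
lines(u, predict(gaussian.reg, newdata=nd, type="response"), lwd=2, col="blue")    # Gaussian, log link
lines(u, predict(poisson.reg, newdata=nd, type="response"), lwd=2, col="purple")   # Poisson, log link
lines(u, predict(poisson.lin, newdata=nd, type="response"), lwd=2, col="orange")   # Poisson, identity link
legend("topleft", c("gaussian, identity","gaussian, log","poisson, log","poisson, identity"),
       lwd=2, col=c("black","blue","purple","orange"))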
So this is basically what GLMs are about….