Residuals from a logistic regression

I always claim that graphs are important in econometrics and statistics! Of course, it is usually not that simple. Let me come back to a recent experience. I got an email from Sami yesterday, sending me a graph of residuals obtained from a logistic regression, and asking me what could be done with such a graph. To get a better understanding, let us consider the following dataset (those are simulated data, but let us assume, as in practice, that we do not know the true model; this is why I decided to embed the code in an R source file)

> source("http://freakonometrics.free.fr/probit.R")   # loads the simulated data: Y, X1 and X2
> reg=glm(Y~X1+X2,family=binomial)

If we use R’s standard diagnostic plots, the first one is the scatterplot of the residuals against the predicted values (the score, actually)

> plot(reg,which=1)

which is simply

> plot(predict(reg),residuals(reg))   # by default, predict() returns the score and residuals() the deviance residuals
> abline(h=0,lty=2,col="grey")

Why do we have those two lines of points? Because we predict a probability for a variable taking values 0 or 1. If the true value is 0, then we always over-predict, and the residuals have to be negative (the blue points); if the true value is 1, then we under-predict, and the residuals have to be positive (the red points). And of course, there is a monotone relationship… We can see more clearly what’s going on when we use colors

> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> abline(h=0,lty=2,col="grey")
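
A quick numerical check of that claim (just a sketch): the sign of the residuals should match the observed class,

> # sanity check: residuals are negative exactly when Y=0, and positive when Y=1
> table(sign(residuals(reg)),Y)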

The points lie exactly on a smooth curve, as a function of the predicted value.
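
In fact, since the residuals here are deviance residuals, each branch is a deterministic function of the score s: writing p = exp(s)/(1+exp(s)) for the predicted probability, the residual is sqrt(-2*log(p)) when y=1, and -sqrt(-2*log(1-p)) when y=0. A small sketch to overlay those two theoretical curves on the previous plot (the range of the score is just a guess),

> # theoretical deviance-residual branches, as functions of the score s
> s=seq(-4,4,by=.01)
> p=exp(s)/(1+exp(s))
> lines(s, sqrt(-2*log(p)),lty=3,col="red")    # branch for the points with Y=1
> lines(s,-sqrt(-2*log(1-p)),lty=3,col="blue") # branch for the points with Y=0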

Now, this graph alone tells us nothing. If we want to understand what is going on, we have to run a local regression,

> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)

This is exactly what we get with the first diagnostic plot. But with this local regression, we do not get a confidence interval. Can’t we claim, from the graph above, that the solid dark line is very close to the dotted line?

> library(splines)                    # bs() is in the splines package
> rl=lm(residuals(reg)~bs(predict(reg),8))
> #rl=loess(residuals(reg)~predict(reg))
> y=predict(rl,se=TRUE)
> segments(predict(reg),y$fit+2*y$se.fit,predict(reg),y$fit-2*y$se.fit,col="green")

Yes, we can. And even if we have a feeling that something could be done, what would this graph suggest?
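
If we want a crude numerical counterpart to that visual claim (just a sketch, reusing the spline fit rl and the predictions y from above), we can compute the proportion of points for which the pointwise band contains zero,

> # share of the points where the band y$fit +/- 2*se.fit covers the horizontal line at 0
> mean(abs(y$fit) <= 2*y$se.fit)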

Actually, that graph is probably not the only way to look at the residuals. Why not plot them against the two explanatory variables? For instance, if we plot the residuals against the second one, we get

> plot(X2,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X2,residuals(reg)),col="black",lwd=2)
> lines(lowess(X2[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X2[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

The graph is similar to the one we had earlier, and again, there is not much to say,

If we now look at the relationship with the first one, it starts to get more interesting,

> plot(X1,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X1,residuals(reg)),col="black",lwd=2)
> lines(lowess(X1[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X1[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

since we can clearly identify a quadratic effect. This graph suggests that we should add the square of the first variable to the regression. And it turns out to be a significant effect,
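
To check that claim (a quick sketch; reg2 is just a temporary name, so that the first regression is not overwritten yet), we can look at the Wald test on the squared term in the glm summary,

> reg2=glm(Y~X1+I(X1^2)+X2,family=binomial)
> summary(reg2)$coefficients["I(X1^2)",]   # estimate, standard error, z value, p-value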

Now, if we run a regression including this quadratic effect, what do we get?

> reg=glm(Y~X1+I(X1^2)+X2,family=binomial)
> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(predict(reg)[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(predict(reg)[Y==1],residuals(reg)[Y==1]),col="red")
> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)
> abline(h=0,lty=2,col="grey")

Actually, it looks like we are back where we were initially… So what is my point? My point is that

  • graphs (yes, plural) can be used to see what might go wrong, and to get more intuition about possible nonlinear transformations
  • graphs are not everything, and they will never be perfect! Here, in theory, the solid line should be a straight, horizontal line. But we also want a model that is as simple as possible. So, at some stage, we should probably give up, and rely on statistical tests and confidence intervals (see the sketch below). Yes, an almost flat line can be interpreted as flat.
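
For instance, a likelihood ratio test comparing the models with and without the quadratic term (a sketch; the model names reg0 and reg1 are mine) can formalize what the residual plots suggested,

> reg0=glm(Y~X1+X2,family=binomial)
> reg1=glm(Y~X1+I(X1^2)+X2,family=binomial)
> anova(reg0,reg1,test="Chisq")   # likelihood ratio test for the quadratic term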



12 thoughts on “Residuals from a logistic regression”

  1. Very nice post, thank you! I was toying around with it and have a fun suggestion for your regression with the quadratic term of X1:

    I know it doesn’t make a difference in terms of the plots and this is all about plots, but I think you could improve your quadratic model by using poly(X1, 2) instead of directly including X1+I(X1^2) to obtain orthogonal terms for the polynomial:

    reg <- glm(Y ~ poly(X1, 2) + X2, family=binomial)

  2. Hi,
    I need help understanding the Residuals vs. Actuals plot in relation to the Residuals vs. Fit plot. (I feel that one can be obtained from the other, but it is not clear to me how.)
    I am using the equation e = y - yhat,
    where e = residual, y = actual, and yhat = fit (i.e. predicted).

    I have a Tobit model with ‘y’ censored to lie in [0,1].
    The Residuals vs. Actuals plot is roughly an upward-trending line: residuals are on the Y-axis and actuals on the X-axis.
    Here is a rough table of the data:
    For a fixed value of y, say:
    (1.) y=0,the band of residuals is between -0.25 and -0.1
    (2.) y=0.2, the band of residuals is between -0.4 and 0
    (3.) y=0.4, the band of residuals varies from -0.3 to 0.1
    (4.) y=0.6, the band of residuals varies from -0.2 to 0.4 and
    (5.) y=1, the band of residuals varies from 0.25 to 0.75.

    How can I tell that the Residuals vs. Fit plot will be a downward-sloping line (confirmed by SAS)? I think I should be able to tell just by looking. Note: the fit values lie roughly in (0.05, 0.75).

    Thanks for any help!

  3. Hello,

    I could not understand: in the graphs where the residuals are plotted against the variables, why are the residuals in the interval of approximately (-3, 3)? Shouldn’t they be in the interval (-1, 1)? Let’s take the biggest residual: say the true value is 0 but the predicted value is 1, so the residual would be -1. Where do such big residuals as -3 or 3 come from? Please explain. Thank you in advance.

  4. The first graph should show “predicted” vs. “residuals”. “Predicted” should be the probability of success, right? But on the x axis are values from approx. -3 to 3. So in fact these are “predicted logits”? Or what? Thanks, Maria

  5. I had likewise been baffled by what to do with residual plots from logistic regression. Thank you for sharing your thoughts. (I like the idea of putting a lowess curve on the residual plot.)

    Two comments:

    1. I discovered that “bs” is in library “splines”, which would have been good to say. (I see that you said it in French in the first comment.)

    2. I think it would be nice if those green error bars went up and down from the lowess curve, rather than from some mysterious (and rather more wiggly) other curve. You could save the results from lowess into a variable to use for this. I think your standard errors would be about as defensible used this way. (A possible sketch follows after this thread.)

    1. It’s only the fourth post on this blog where I use this bs() function (see http://freakonometrics.hypotheses.org/?s=bs%28&x=-1389&y=-229). I now load it when I open R, so unfortunately, I forgot to mention it here. But it is not the first case, and it will not be the last one! Actually, I know that some of the code I upload, from my teaching, does contain errors (in the sense that some packages are not mentioned, or some constants are not defined). But I think it is a way to learn: I want my students to learn (by themselves) how to fix problems.
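
As a side note on the second suggestion in the comment above: one possible sketch (just an illustration, not the original code) keeps the standard errors from the spline regression, but centers the bars on the lowess curve, evaluated at each observation with approx(),

> lw=lowess(predict(reg),residuals(reg))
> ctr=approx(lw$x,lw$y,xout=predict(reg))$y    # lowess value at each observation
> library(splines)
> rl=lm(residuals(reg)~bs(predict(reg),8))
> y=predict(rl,se=TRUE)
> segments(predict(reg),ctr+2*y$se.fit,predict(reg),ctr-2*y$se.fit,col="green")
> lines(lw,col="black",lwd=2)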
