Following my course this morning, I got a very interesting question from one of my students. The question was about non-significant components in a spline regression: should we consider a model with a small number of knots, where all components are significant, or one with a (much) larger number of knots, where a lot of components are non-significant?
My initial intuition was to prefer the second alternative, as with autoregressive models in R: when we fit an AR(6) model, it is not really a big deal if most coefficients are not significant (except the last one), since it won't affect the forecasts much. So here, it might be the same: with a larger number of knots, we should be able to capture small bumps that we would never capture with a smaller number.
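As a quick illustration of that intuition, here is a small sketch (the simulated series below, with only the sixth lag non-zero and set arbitrarily to 0.5, is not from the original example):

> set.seed(1)
> sim=arima.sim(n=1000,model=list(ar=c(0,0,0,0,0,.5)))  # only the sixth lag actually matters
> fit=arima(sim,order=c(6,0,0))
> fit                          # lags 1 to 5 should be small relative to their standard errors
> predict(fit,n.ahead=10)$pred # forecasts are essentially driven by the sixth lag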
Here is what we have with a small number of knots, and cubic splines
and with a larger number of knots
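A minimal sketch of how such a comparison could be produced on any dataset, assuming a response y observed at sorted design points x (the degrees of freedom, 5 and 25, are arbitrary choices for illustration):

> library(splines)
> reg_few=lm(y~bs(x,df=5))    # few knots: all components tend to be significant
> reg_many=lm(y~bs(x,df=25))  # many knots: most components will not be significant
> plot(x,y)
> lines(x,predict(reg_few),col="blue")
> lines(x,predict(reg_many),col="red")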
In order to understand what is going on, consider a simple model, based on the two splines above, in red.
> set.seed(1)
> library(splines)
> x=seq(0,1,by=.01)
> v=bs(x,10)
> x2=v[,2]
> x10=v[,10]
> set.seed(1)
> y=1+3*x2+5*x10+rnorm(length(x))/4
> y_test=1+3*x2+5*x10+rnorm(length(x))/4
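Those two splines in red are simply the second and the tenth columns of the B-spline basis, and can be drawn directly (the plotting options below are purely cosmetic):

> plot(x,x2,type="l",col="red",ylim=c(0,1),ylab="")  # second basis function
> lines(x,x10,col="red")                             # tenth basis function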
Note that I have generated two sets of data here, one to train a model and one to test it. The training data look like this
> plot(x,y)
It is based on two splines,
> lines(x,1+3*x2+5*x10)
If we use a spline model with 10 degrees of freedom, we get
> df=data.frame(x,y)
> reg=lm(y~bs(x,10),data=df)
> summary(reg)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.91671    0.17068   5.371 6.08e-07 ***
bs(x, 10)1   0.20485    0.32696   0.627    0.533    
bs(x, 10)2   3.15593    0.22534  14.005  < 2e-16 ***
bs(x, 10)3   0.04847    0.25075   0.193    0.847    
bs(x, 10)4   0.09373    0.21597   0.434    0.665    
bs(x, 10)5   0.11624    0.22939   0.507    0.614    
bs(x, 10)6   0.24829    0.22293   1.114    0.268    
bs(x, 10)7  -0.06825    0.23498  -0.290    0.772    
bs(x, 10)8   0.19633    0.26241   0.748    0.456    
bs(x, 10)9   0.27557    0.26976   1.022    0.310    
bs(x, 10)10  4.78134    0.24116  19.826  < 2e-16 ***
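The non-significant components can also be listed programmatically, from the p-values of the individual t-tests (the 5% threshold is only the usual convention):

> pv=summary(reg)$coefficients[,4]
> names(pv)[pv>.05]   # components that cannot be distinguished from zero at the 5% level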
This makes sense, given what we have generated: most of the components are not significant, except the second and the tenth. We can actually test whether all those other components are null (at the same time)
> library(car)
> A=matrix(0,8,11)
> colnames(A)=names(coefficients(reg))
> A[1,2]=A[2,4]=A[3,5]=A[4,6]=A[5,7]=
+ A[6,8]=A[7,9]=A[8,10]=1
> b=rep(0,8)
> linearHypothesis(reg,A,b)
Linear hypothesis test

Hypothesis:
bs(x, 10)1 = 0
bs(x, 10)3 = 0
bs(x, 10)4 = 0
bs(x, 10)5 = 0
bs(x, 10)6 = 0
bs(x, 10)7 = 0
bs(x, 10)8 = 0
bs(x, 10)9 = 0

Model 1: restricted model
Model 2: y ~ bs(x, 10)

  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     98 4.8766                           
2     90 4.6196  8   0.25701 0.6259  0.754
and indeed, with a p-value of 0.754, we cannot reject the hypothesis that those eight coefficients are all null.
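The same joint test can be phrased as a comparison between the restricted model (intercept plus the second and tenth components only) and the full one, which should return the same F statistic:

> reg0=lm(y~x2+x10)   # restricted model
> anova(reg0,reg)     # model comparison, equivalent to the linear hypothesis above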
> yp10=predict(reg)
> lines(df$x,yp10,col="red")
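Since a second sample y_test was generated at the same design points, a natural way to compare models with different numbers of knots is through their out-of-sample error; here is a minimal sketch (the 3-degree-of-freedom alternative is an arbitrary comparison point):

> mean((y_test-yp10)^2)            # out-of-sample error of the 10-df model
> reg3=lm(y~bs(x,3),data=df)
> mean((y_test-predict(reg3))^2)   # out-of-sample error of a 3-df model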