Some heuristics about spline smoothing

Let us continue our discussion on smoothing techniques in regression. Assume that $y_i=m(x_i)+\varepsilon_i$, where $m(\cdot)$ is some unknown function, assumed to be sufficiently smooth. For instance, assume that $m(\cdot)$ is continuous, that $m'(\cdot)$ exists and is continuous, that $m''(\cdot)$ exists and is also continuous, etc. If $m(\cdot)$ is smooth enough, Taylor's expansion can be used. Hence, for $x\in(\alpha,\beta)$,

$$m(x)=m(\alpha)+m'(\alpha)(x-\alpha)+\cdots+\frac{m^{(d)}(\alpha)}{d!}(x-\alpha)^d+\int_\alpha^x\frac{(x-t)^d}{d!}\,m^{(d+1)}(t)\,dt$$

which can also be written as

$$m(x)=a_0+a_1x+\cdots+a_dx^d+\frac{1}{d!}\int_\alpha^x(x-t)^d\,m^{(d+1)}(t)\,dt$$

for some $a_k$'s. The first part is simply a polynomial.

The second part is some integral. Using a Riemann sum to approximate that integral, observe that it can be written as

$$b_1(x-x_1)_+^d+\cdots+b_k(x-x_k)_+^d$$

for some $b_i$'s and some knots $x_1,\cdots,x_k$, where $(x-x_i)_+=\max\{0,x-x_i\}$.

Thus,

$$m(x)\approx a_0+a_1x+\cdots+a_dx^d+b_1(x-x_1)_+^d+\cdots+b_k(x-x_k)_+^d$$

Nice! We have our linear regression model. A natural idea is then to consider a regression of $Y$ on $\boldsymbol{X}$ where

$$\boldsymbol{X}=(1,X,X^2,\cdots,X^d,(X-x_1)_+^d,\cdots,(X-x_k)_+^d)$$

given some knots $\{x_1,\cdots,x_k\}$. To make things easier to understand, let us work with our previous dataset (the simulated dataset db generated in the kernel smoothing section below),

plot(db)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_146.png

If we consider one knot, and an expansion of order 1,

attach(db)
library(splines)
B=bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red")
lines(xr[xr>=3],predict(reg)[xr>=3],col="blue")

The prediction obtained with this spline can be compared with regressions on subsets (the dotted lines)

reg=lm(yr~xr,subset=xr<=3)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red",lty=2)
reg=lm(yr~xr,subset=xr>=3)
lines(xr[xr>=3],predict(reg),col="blue",lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_160.png

It is different, since here we have three parameters (and not four, as for the two separate regressions on the subsets). One degree of freedom is lost when asking for a continuous model. Observe that it is possible to write, equivalently

reg=lm(yr~bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1),data=db)
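To see the connection with the truncated power functions above, the same model can also be fitted by hand (a quick check, not in the original post, using the same xr, yr and the knot at 3),

reg2=lm(yr~xr+pmax(xr-3,0))             # intercept, slope, and change of slope at the knot
max(abs(predict(reg)-predict(reg2)))    # fitted values identical, up to numerical error

Both parametrisations span the same space of continuous broken-line functions, hence the three parameters mentioned above.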

So, what happened here?

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=1)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

Here, the functions that appear in the regression are the following

http://freakonometrics.hypotheses.org/files/2013/10/Selection_161.png

Now, if we run the regression on those components, we get a continuous, piecewise-linear fit.

If we add one knot, we get

http://freakonometrics.hypotheses.org/files/2013/10/Selection_162.png

the prediction is

reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_147.png

Of course, we can use many more knots,

B=bs(xr,knots=1:9,Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_148.png

We can even get a confidence interval

reg=lm(yr~B)
P=predict(reg,interval="confidence")    # pointwise 95% confidence band
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_149.png

And if we keep the two knots we chose previously, but consider Taylor's expansion of order 2, we get

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=2)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_163.png

So, what's going on? If we consider the constant, and the first component of the spline basis matrix, we get

k=2
plot(db)
reg=lm(yr~B)      # refit the regression on the quadratic spline basis defined above
B=cbind(1,B)      # add the intercept column, so that columns match the coefficients
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_164.png

If we add the constant term, the first term and the second term, we get the part on the left, before the first knot,

k=3
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_165.png

and with three terms from the spline basis matrix, we can get the part between the two knots,

k=4
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_166.png

and finally, when we sum all the terms, we get, this time, the part on the right, after the last knot,

k=5
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_167.png
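As a quick check (not in the original post), summing all five columns does recover the full fitted curve,

max(abs(B%*%coefficients(reg)-predict(reg)))    # essentially zero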

This is what we get using a quadratic spline regression, with two (fixed) knots. And we can even get confidence intervals, as before

# reg still contains the quadratic spline regression fitted above
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_168.png

The great idea here is to use the functions $(x-x_i)_+$, which ensure continuity at the knot $x_i$.
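Just to visualize those building blocks (a small side plot, not in the original post, with a knot at 2),

curve(pmax(x-2,0),from=0,to=10,lty=1)              # (x-2)_+ : continuous, with a kink at the knot
curve(pmax(x-2,0)^2,from=0,to=10,add=TRUE,lty=2)   # (x-2)_+^2 : continuous first derivative

The higher the power, the smoother the junction at the knot.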

Of course, we can use those splines on our Dexter application (the TV-show dataset built in the kernel smoothing section below),

http://freakonometrics.hypotheses.org/files/2013/10/Selection_170.png

Here again, using a linear spline function, it is possible to impose a continuity constraint,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=1),data=data)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_172.png

But we can also consider some quadratic splines,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=2),data=data)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_171.png

Some heuristics about local regression and kernel smoothing

In a standard linear model, we assume that $E(Y|X=x)=\beta_0+\beta_1x$. Alternatives can be considered when the linear assumption is too strong.

  • Polynomial regression

A natural extension might be to assume some polynomial function,

$$E(Y|X=x)=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_dx^d$$

Again, in the standard linear model approach (with a conditional normal distribution, using the GLM terminology), the parameters can be obtained using least squares, where a regression of $Y$ on $\boldsymbol{X}=(1,X,X^2,\cdots,X^d)$ is considered.

Even if this polynomial model is not the true one, it might still be a good approximation for $m(x)=E(Y|X=x)$. Actually, from the Stone-Weierstrass theorem, if $m(\cdot)$ is continuous on some closed and bounded interval, then there is a uniform approximation of $m(\cdot)$ by polynomial functions.
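Just to get a feeling for that result (a small numerical check, not in the original post), we can fit least-squares polynomials of increasing degree to the function $\sin(x/2)$ used to simulate the data below, on a fine grid, and look at the maximum error; least squares is not the uniform approximation of the theorem, but the error shrinks all the same,

xx=seq(0,10,by=.01)
ff=sin(xx/2)                     # the noise-free function used in the simulation below
for(d in c(2,5,10)) cat("degree",d,": max abs error",
  max(abs(ff-fitted(lm(ff~poly(xx,d))))),"\n")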

Just to illustrate, consider the following (simulated) dataset

set.seed(1)
n=10
xr = seq(0,n,by=.1)
yr = sin(xr/2)+rnorm(length(xr))/2
db = data.frame(x=xr,y=yr)
plot(db)

with the standard regression line

reg = lm(y ~ x,data=db)
abline(reg,col="red")

Consider some polynomial regression. If the degree of the polynomial function is large enough, any kind of pattern can be obtained,

reg=lm(y~poly(x,5),data=db)

But if the degree is too large, then too many ‘oscillations’ are obtained,

reg=lm(y~poly(x,25),data=db)

and the estimation might be seen as no longer robust: if we change one point, there might be important (local) changes

plot(db)
attach(db)
lines(xr,predict(reg),col="red",lty=2)
yrm=yr;yrm[31]=yr[31]-2 
regm=lm(yrm~poly(xr,25)) 
lines(xr,predict(regm),col="red")

  • Local regression

Actually, if our interest is to have, locally, a good approximation of $m(x)$, why not use a local regression?

This can be done easily using a weighted regression where, in the least squares formulation, we consider

$$\sum_{i=1}^n\omega_i(x)\,\left[y_i-(\beta_0+\beta_1x_i)\right]^2$$

where the weights $\omega_i(x)$ depend on the point $x$ where we want the prediction (it is possible to consider weights in the GLM framework, but let's keep that for another post). Two comments here:

  • here I consider a linear model, but any polynomial model can be considered, even a constant one. In that case, the optimization problem is

$$\min_{\beta_0}\ \sum_{i=1}^n\omega_i(x)\,\left[y_i-\beta_0\right]^2$$

which can be solved explicitly (see the quick check after this list), since

$$\widehat{\beta}_0=\frac{\sum_{i=1}^n\omega_i(x)\,y_i}{\sum_{i=1}^n\omega_i(x)}$$

  • so far, nothing was mentioned about the weights. The idea is simple: if we want a good prediction at point $x$, then $\omega_i(x)$ should decrease with some distance between $x$ and $x_i$: if $x_i$ is too far from $x$, then it should not have too much influence on the prediction.
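A quick numerical check of that closed-form expression (not in the original post; any positive weights will do),

w=runif(length(yr))                              # some arbitrary positive weights
c(coef(lm(yr~1,weights=w)),sum(w*yr)/sum(w))     # the two numbers coincide: the weighted mean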

For instance, if we want a prediction at some point $x$, consider $\omega_i(x)=\boldsymbol{1}(|x_i-x|\leq 1)$. With those weights, we simply remove observations that are too far away. Actually, here, it is the same as

reg=lm(yr~xr,subset=which(abs(xr-x0)<1))
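Equivalently (a small sketch, with an arbitrary evaluation point, say x0=2), the same fit is obtained as a weighted regression with 0/1 weights, since zero-weight observations do not contribute to the least squares criterion,

x0=2
w=as.numeric(abs(xr-x0)<1)       # indicator weights: only nearby observations matter
reg=lm(yr~xr,weights=w)          # same coefficients as the subset regression above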

A more general idea is to consider some kernel function $k(\cdot)$ that gives the shape of the weight function, and some bandwidth (usually denoted $h$) that gives the length of the neighborhood, so that

$$\omega_i(x)=k\!\left(\frac{x-x_i}{h}\right)$$

With a local constant model, this is actually the so-called Nadaraya-Watson estimator of the function $m(\cdot)$,

$$\widehat{m}_h(x)=\frac{\sum_{i=1}^n k\!\left(\frac{x-x_i}{h}\right)y_i}{\sum_{i=1}^n k\!\left(\frac{x-x_i}{h}\right)}$$

In the previous case, we did consider a uniform kernel, $k(u)=\frac{1}{2}\boldsymbol{1}(|u|\leq 1)$, with bandwidth $h=1$.

But using this weight function, with its strong discontinuity, may not be the best idea… Why not use a Gaussian kernel,

$$k(u)=\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\ ?$$

This can be done using

fitloc0 = function(x0){
w=dnorm((xr-x0))                      # Gaussian weights centered at x0 (bandwidth 1)
reg=lm(y~1,data=db,weights=w)         # local constant fit, i.e. a weighted mean
return(predict(reg,newdata=data.frame(x=x0)))}

On our dataset, we can plot

ul=seq(0,10,by=.01)
vl0=Vectorize(fitloc0)(ul)
u0=seq(-2,7,by=.01)
linearlocalconst=function(x0){
w=dnorm((xr-x0))                       # weights for the evaluation point x0
plot(db,cex=abs(w)*4)                  # point size proportional to the weight
lines(ul,vl0,col="red")                # local constant fit, over the whole grid
axis(3)
axis(2)
reg=lm(y~1,data=db,weights=w)          # weighted regression on a constant
u=seq(0,10,by=.02)
v=predict(reg,newdata=data.frame(x=u))
lines(u,v,col="red",lwd=2)             # the local (horizontal) regression line
abline(v=c(0,x0,10),lty=2)
}
linearlocalconst(2)

Here, we want a local regression at point 2. The horizontal line below is the (local) regression: the size of each point is proportional to its weight. The curve, in red, is the evolution of the local regression as the evaluation point moves.

Let us use an animation to visualize the construction of the curve. One can use

library(animation)

but for some reason, I cannot install the package easily on Linux. It is not a big deal: we can still use a loop to generate some graphs

vx0=seq(1,9,by=.1)
vx0=c(vx0,rev(vx0))
graphloc=function(i){
name=paste("local-reg-",100+i,".png",sep="")
png(name,600,400)
linearlocalconst(vx0[i])
dev.off()}

for(i in 1:length(vx0)) graphloc(i)

and then, in a terminal, I simply use

    convert -delay 25 /home/freak/local-reg-1*.png /home/freak/local-reg.gif

Of course, it is possible to consider a linear model, locally,

fitloc1 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=1),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}

or even a quadratic (local) regression,

fitloc2 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=2),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}

Of course, we can change the bandwidth
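For instance, the bandwidth can be made an explicit argument of the previous function (a sketch; the name fitloc2h and the default h=2 are mine, not from the original post),

fitloc2h = function(x0,h=2){
w=dnorm((xr-x0)/h)                 # Gaussian weights, rescaled by the bandwidth h
reg=lm(y~poly(x,degree=2),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
vl2=Vectorize(fitloc2h)(ul)
lines(ul,vl2,col="blue")

A larger h gives a smoother (but more biased) curve, while a smaller h follows the observations more closely.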

To conclude the technical part of this post, observe that, in practice, we have to choose the shape of the weight function (the so-called kernel). But there are (simple) techniques to select the "optimal" bandwidth $h$. The idea of cross-validation is to consider

$$\sum_{i=1}^n\left[y_i-\widehat{m}_h(x_i)\right]^2$$

where $\widehat{m}_h(x_i)$ is the prediction obtained using a local regression technique, with bandwidth $h$. And to get a more accurate (and optimal) bandwidth, $\widehat{m}_h^{(-i)}(x_i)$ is used instead, obtained from a model estimated on the sample where the $i$th observation was removed. But again, that is not the main point in this post, so let's keep that for another one…
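Just to illustrate the idea on the simulated dataset (a rough sketch, not in the original post, for a local constant fit with a Gaussian kernel), the leave-one-out criterion can be computed by brute force,

cv_h=function(h) sum(sapply(1:nrow(db),function(i){
w=dnorm((db$x[-i]-db$x[i])/h)         # Gaussian weights, leaving observation i out
(db$y[i]-sum(w*db$y[-i])/sum(w))^2    # squared error of the local constant prediction at x_i
}))
vh=seq(.2,2,by=.1)
plot(vh,Vectorize(cv_h)(vh),type="l")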

Perhaps we can try on some real data? Inspired by a great post by François Briatte, http://f.briatte.org/teaching/ida/092_smoothing.html, consider the Global Episode Opinion Survey, for some TV show, http://geos.tv/index.php/index?sid=189, like Dexter.

library(XML)
library(downloader)
file = "geos-tww.csv"
html = htmlParse("http://www.geos.tv/index.php/list?sid=189&collection=all")
html = xpathApply(html, "//table[@id='collectionTable']")[[1]]
data = readHTMLTable(html)
data = data[,-3]
names(data)=c("no",names(data)[-1])
data=data[-(61:64),]

Let us reshape the dataset,

data$no = 1:96
data$mu = as.numeric(substr(as.character(data$Mean), 0, 4))
data$se =  sd(data$mu,na.rm=TRUE)/sqrt(as.numeric(as.character(data$Count)))
data$season = 1 + (data$no - 1)%/%12
data$season = factor(data$season)
plot(data$no,data$mu,ylim=c(6,10))
segments(data$no,data$mu-1.96*data$se,
data$no,data$mu+1.96*data$se,col="light blue")

As done by François, we compute some kind of standard error, just to reflect uncertainty. But we won’t really use it.

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
for(s in 1:8){reg=lm(mu~no,data=data,subset=season==s)
lines((s-1)*12+1:12,predict(reg)[1:12],col="red") }

Here, we assume that all seasons should be considered as completely independent… which might not be a great assumption.

db = data
NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=5)
plot(data$no,data$mu)
lines(NW,col="red")

We can try to look at the curve with a larger bandwidth. The problem is that there is a missing value, at the end. If we (arbitrarily) fill it in, we can run a kernel regression,

db$mu[95]=7
NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=12) 
plot(data$no,data$mu,ylim=c(6,10)) 
lines(NW,col="red")