
Some heuristics about spline smoothing

Let us continue our discussion on smoothing techniques in regression. Assume that $y_i=m(x_i)+\varepsilon_i$, where $m(\cdot)$ is some unknown function, assumed to be sufficiently smooth. For instance, assume that $m$ is continuous, that $m'$ exists and is continuous, that $m''$ exists and is also continuous, etc. If $m$ is smooth enough, Taylor's expansion can be used. Hence, for $x\in(\alpha,\beta)$,

$$m(x)=m(\alpha)+\sum_{k=1}^d\frac{m^{(k)}(\alpha)}{k!}(x-\alpha)^k+\int_\alpha^x\frac{(x-t)^d}{d!}\,m^{(d+1)}(t)\,dt$$

which can also be written as

$$m(x)=\sum_{k=0}^d a_k x^k+\frac{1}{d!}\int_\alpha^x(x-t)^d\,m^{(d+1)}(t)\,dt$$

for some $a_k$'s. The first part is simply a polynomial.

The second part is some integral. Using a Riemann sum, observe that

$$\int_\alpha^x(x-t)^d\,m^{(d+1)}(t)\,dt\approx\sum_{i=1}^k b_i\,(x-x_i)_+^d$$

for some $b_i$'s, and some knots $x_1<\cdots<x_k$ in $(\alpha,\beta)$.

Thus,

$$m(x)\approx\sum_{k=0}^d a_k x^k+\sum_{i=1}^k b_i\,(x-x_i)_+^d$$

Nice! We have our linear regression model. A natural idea is then to consider a regression of $Y$ on $\boldsymbol{X}$ where

$$\boldsymbol{X}=(1,X,X^2,\cdots,X^d,(X-x_1)_+^d,\cdots,(X-x_k)_+^d)$$

given some knots $\{x_1,\cdots,x_k\}$. To make things easier to understand, let us work with our previous dataset,
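(If you do not have the dataset from the previous post at hand, a similar toy dataset can be simulated along the following lines; the exact data generating process used previously may differ, so this is only a stand-in.)

set.seed(1)
xr=seq(0,10,length=501)             # regularly spaced covariate on [0,10]
yr=sin(xr/2)+rnorm(length(xr))/2    # some smooth signal plus Gaussian noise
db=data.frame(xr,yr)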

plot(db)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_146.png

If we consider one knot, and an expansion of order 1,

attach(db)
library(splines)
B=bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red")
lines(xr[xr>=3],predict(reg)[xr>=3],col="blue")

The prediction obtained with this spline can be compared with the regressions on the two subsets (the dotted lines),

reg=lm(yr~xr,subset=xr<=3)
lines(xr[xr<=3],predict(reg),col="red",lty=2)
reg=lm(yr~xr,subset=xr>=3)
lines(xr[xr>=3],predict(reg),col="blue",lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_160.png

It is different, since we have here three parameters (and not four, as for the regressions on the two subsets): one degree of freedom is lost when asking for a continuous model. Observe that it is possible to write, equivalently,

reg=lm(yr~bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1),data=db)
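As a sanity check, the same three-parameter continuous broken-line model can be fitted without bs(), using the truncated power function $(x-x_1)_+$ directly (pos is just a helper defined here); the fitted values are identical, only the parametrization of the basis changes,

pos=function(x,s) pmax(x-s,0)      # truncated power function (x-s)_+
reg2=lm(yr~xr+pos(xr,3))           # intercept, slope, and change of slope at the knot
lines(xr,predict(reg2),col="green",lty=3)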

So, what happened here?

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=1)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

Here, the functions that appear in the regression are the following

http://freakonometrics.hypotheses.org/files/2013/10/Selection_161.png
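Note that with two interior knots and a degree-1 expansion, the basis has 2+1=3 columns (the intercept being added separately by lm), which can be checked directly,

ncol(B)   # number of knots + degree = 2+1 = 3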

Now, if we run the regression on those components, we get a continuous, piecewise linear fit.

If we add one knot, we get

http://freakonometrics.hypotheses.org/files/2013/10/Selection_162.png

the prediction is

reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_147.png

Of course, we can use many more knots,

B=bs(xr,knots=1:9,Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_148.png

We can even get a confidence interval

reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_149.png

And if we keep the two knots we chose previously, but consider Taylor's expansion of order 2, we get

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=2)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_163.png

So, what’s going on? If we consider the constant and the first component of the spline basis matrix, we get

k=2
plot(db)
reg=lm(yr~B)   # refit the regression on the quadratic spline basis
B=cbind(1,B)   # prepend the constant column, so that B[,1:k] includes the intercept
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_164.png

If we add the constant term, the first term and the second term, we get the part on the left, before the first knot,

k=3
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_165.png

and with three terms from the spline basis matrix, we can get the part between the two knots,

k=4
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_166.png

and finally, when we sum all the terms, we get this time the part on the right, after the last knot,

k=5
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_167.png
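Note that the four incremental curves above can be drawn in one single loop, with exactly the same matrix products,

plot(db)
for(k in 2:5) lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)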

This is what we get using a quadratic spline regression, with two (fixed) knots. And we can even get confidence intervals, as before,

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=2)
reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_168.png

The great idea here is to use functions $(x-x_i)_+$, which ensure continuity at the knots $x_i$.
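To visualise that continuity, recall that $x\mapsto(x-x_i)_+$ is continuous with a kink at the knot, while $(x-x_i)_+^2$ is even continuously differentiable there; a quick sketch, with a knot at 3,

curve(pmax(x-3,0),0,10,ylab="")              # continuous, kink at the knot
curve(pmax(x-3,0)^2,0,10,lty=2,add=TRUE)     # continuous first derivative at the knot
abline(v=3,lty=3)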

Of course, we can use those splines on our Dexter application,

http://freakonometrics.hypotheses.org/files/2013/10/Selection_170.png

Here again, using linear spline functions, it is possible to impose a continuity constraint,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=1),data=data)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_172.png

But we can also consider some quadratic splines,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=2),data=data)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_171.png

Confidence interval for predictions with GLMs

Consider a (simple) Poisson regression http://freakonometrics.hypotheses.org/files/2016/11/poiss01.gif. Given a sample http://freakonometrics.hypotheses.org/files/2016/11/poiss02.gif where http://freakonometrics.hypotheses.org/files/2016/11/poiss03.gif, the goal is to derive a 95% confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif given http://freakonometrics.hypotheses.org/files/2016/11/poiss05.gif, where http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif is the prediction. Hence, we want to derive a confidence interval for the prediction, not the potential observation, i.e. the dot on the graph below

> reg=glm(dist~speed,data=cars,family=poisson)
> P=predict(reg,type="response",
+ newdata=data.frame(speed=seq(-1,35,by=.2)))
> plot(cars,xlim=c(0,31),ylim=c(0,170))
> abline(v=30,lty=2)
> lines(seq(-1,35,by=.2),P,lwd=2,col="red")
> P0=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> points(30,P0$fit,pch=4,lwd=3)


Let http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif denote the maximum likelihood estimator of http://freakonometrics.hypotheses.org/files/2016/11/poiss07.gif. Then
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif
where http://freakonometrics.hypotheses.org/files/2016/11/poiss101.gif is the Fisher information of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif (from standard maximum likelihood theory). Recall that
http://freakonometrics.hypotheses.org/files/2016/11/poiss13.gif
where computation of those values is based on the following calculations
http://freakonometrics.blog.free.fr/public/latex/poiss21.gif
In the case of the log-Poisson regression
http://freakonometrics.hypotheses.org/files/2016/11/poiss36.gif
Let us get back to our initial problem.

  • confidence interval for the linear combination

A first idea to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif is to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss100.gif (by taking the exponential of the bounds, since the exponential is a monotone function). Asymptotically, we know that
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif

thus, an approximation for the variance matrix of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif will be based on http://freakonometrics.hypotheses.org/files/2016/11/poiss45.gif, obtained by plugging in estimates of the parameters.
Then, since http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif has an asymptotic multivariate normal distribution, any linear combination of the parameters will also be (asymptotically) normal, i.e.
http://freakonometrics.hypotheses.org/files/2016/11/poiss47.gif has a normal distribution, centered on http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif, with variance http://freakonometrics.hypotheses.org/files/2016/11/poiss102.gif where http://freakonometrics.hypotheses.org/files/2016/11/Poiss110.gif is the variance of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif. All those quantities can be easily computed. First, we can get the variance of the estimators

> i1=sum(predict(reg,type="response"))
> i2=sum(cars$speed*predict(reg,type="response"))
> i3=sum(cars$speed^2*predict(reg,type="response"))
> I=matrix(c(i1,i2,i2,i3),2,2)
> V=solve(I)

Hence, if we compare with the output of the regression,

> summary(reg)$cov.unscaled
              (Intercept)         speed
(Intercept)  0.0066870446 -3.474479e-04
speed       -0.0003474479  1.940302e-05
> V
              [,1]          [,2]
[1,]  0.0066871228 -3.474515e-04
[2,] -0.0003474515  1.940318e-05

Based on those values, it is easy to derive the standard deviation for the linear combination,

> x=30
> P2=predict(reg,type="link",se.fit=TRUE,
+ newdata=data.frame(speed=x))
> P2
$fit
1
5.046034

$se.fit
[1] 0.05747075

$residual.scale
[1] 1

> sqrt(V[1,1]+2*x*V[2,1]+x^2*V[2,2])
[1] 0.05747084
> sqrt(t(c(1,x))%*%V%*%c(1,x))
           [,1]
[1,] 0.05747084

And once we have the standard deviation, and normality (at least asymptotically), a confidence interval is easily derived; then, taking the exponential of the bounds, we get a confidence interval for the prediction,

> segments(30,exp(P2$fit-1.96*P2$se.fit),
+ 30,exp(P2$fit+1.96*P2$se.fit),col="blue",lwd=3)

Based on that technique, confidence intervals are no longer centered on the prediction. But who cares?

  • delta method

Actually, those who like confidence intervals written as “estimate, plus or minus something” will not like non-centered intervals. So, an alternative is to use the delta method. Instead of writing (again) something about the theory, we can use a package which implements that method,

> estmean=t(c(1,x))%*%coef(reg)
> var=t(c(1,x))%*%summary(reg)$cov.unscaled%*%c(1,x)
> library(msm)
> deltamethod(~ exp(x1), estmean, var)
[1] 8.931232
> P1=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> P1
$fit
1
155.4048

$se.fit
1
8.931232

$residual.scale
[1] 1
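Since the transformation here is the exponential (whose derivative is the exponential itself), the delta method simply rescales the standard error obtained on the link scale, i.e. se(prediction) ≈ exp(fit) × se(link); we can check that this matches the output above,

> exp(P2$fit)*P2$se.fit   # ≈ 8.9312, the se.fit returned on the response scale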

The delta method gives us (asymptotic) normality, so once we have a standard deviation, we get the confidence interval.

> segments(30,P1$fit-1.96*P1$se.fit,30,
+ P1$fit+1.96*P1$se.fit,col="blue",lwd=3)

Note that those quantities – obtained with two different approaches – are rather close here

> exp(P2$fit-1.96*P2$se.fit)
1
138.8495
> P1$fit-1.96*P1$se.fit
1
137.8996
> exp(P2$fit+1.96*P2$se.fit)
1
173.9341
> P1$fit+1.96*P1$se.fit
1
172.9101

  • bootstrap techniques

And a third method (but far from what I expect to teach in that course) is to use bootstrap techniques instead of those results based on asymptotic normality (we have only 50 observations). The idea is to sample from our dataset, to run a log-Poisson regression on each of those new samples, and to repeat that a lot of times,
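for instance along the following lines (a minimal sketch: pairs of observations are resampled with replacement, and the number of replications, 1000, is arbitrary),

> set.seed(123)
> VP=rep(NA,1000)
> for(s in 1:1000){
+ ind=sample(1:nrow(cars),replace=TRUE)
+ regb=glm(dist~speed,data=cars[ind,],family=poisson)
+ VP[s]=predict(regb,type="response",newdata=data.frame(speed=30))
+ }
> quantile(VP,c(.025,.975))   # percentile-based confidence interval for the prediction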