In a post published in July, I mentioned the so-called Goldilocks principle in the context of kernel density estimation and bandwidth selection: the bandwidth should not be too small (the variance would be too large) nor too large (the bias would be too large). Another standard method to select the bandwidth, as mentioned this afternoon in class, is the cross-validation technique described in Chiu (1991). Here, we would like to minimize the mean integrated squared error
$$\mathbb{E}\left[\int\left(\widehat{f}_h(x)-f(x)\right)^2dx\right]$$
The integral can be written
$$\int\widehat{f}_h(x)^2dx-2\int\widehat{f}_h(x)f(x)dx+\int f(x)^2dx$$
Since the third term does not depend on the bandwidth $h$, we only have to minimize the expected value of the sum of the first two.
Since $\int\widehat{f}_h(x)f(x)dx=\mathbb{E}\big[\widehat{f}_h(X)\big]$ for a new observation $X$ drawn from $f$, the idea is to approximate that second term with a leave-one-out average, and to minimize
$$J(h)=\int\widehat{f}_h(x)^2dx-\frac{2}{n}\sum_{i=1}^n\widehat{f}_{h,(-i)}(X_i)$$
where $\widehat{f}_{h,(-i)}$ denotes the density estimate obtained on the sample without observation $X_i$, which can easily be computed. Consider a sample of 50 observations from a Gaussian distribution,
> set.seed(1)
> X=rnorm(50)
From Silverman’s rule of thumb (which should be appropriate here, since the sample is drawn from a Gaussian distribution), the optimal bandwidth is
> 1.06*sd(X)*length(X)^(-1/5)
[1] 0.4030127
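As a side note, base R has a built-in version of this rule of thumb, bw.nrd() (a quick sketch, not used in the rest of the post); it uses the minimum of the standard deviation and the interquartile range divided by 1.34 rather than the standard deviation alone, so the value can differ slightly from the one above.

> # rule-of-thumb bandwidth as implemented in the stats package
> # (factor 1.06, but using min(sd(X), IQR(X)/1.34) instead of sd(X))
> bw.nrd(X)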
Using the cross-validation technique mentioned above, we compute
> J=function(h){
+   # estimate of the density at x, using the full sample and bandwidth h
+   fhat=Vectorize(function(x) density(X,from=x,to=x,n=1,bw=h)$y)
+   # leave-one-out estimate of the density at X[i]
+   fhati=Vectorize(function(i) density(X[-i],from=X[i],to=X[i],n=1,bw=h)$y)
+   F=fhati(1:length(X))
+   # integral of fhat^2 minus twice the leave-one-out average
+   return(integrate(function(x) fhat(x)^2,-Inf,Inf)$value-2*mean(F))
+ }
> vx=seq(.1,1,by=.01)
> vy=Vectorize(J)(vx)
> df=data.frame(vx,vy)
> library(ggplot2)
> qplot(vx,vy,geom="line",data=df)
The function has the following shape
and the optimal value is
> optimize(J,interval=c(.1,1))
$minimum
[1] 0.4687553

$objective
[1] -0.3355477
Note that, indeed, it is close to Silverman’s optimal bandwidth.
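As a quick sanity check (a sketch, not part of the code above), the same unbiased cross-validation criterion is also available in base R through bw.ucv(); it minimizes a binned approximation of the criterion over its own search interval, so the value need not match the one returned by optimize() exactly, but it should be of the same order.

> # unbiased (least-squares) cross-validation bandwidth from the stats package,
> # searched over the same interval as above
> bw.ucv(X,lower=.1,upper=1)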
this is insane code.
Thank you for your great post.
When I ran your code, it resulted in
set.seed(20160705)
n <- 100
X <- rnorm(n)
optimize(J,interval=c(.1,1))
$minimum
[1] 0.5030485
$objective
[1] -0.2610623
Is it a plausible bandwidth?
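For what it is worth, one rough way to judge the order of magnitude (a sketch, using the n=100 sample simulated in the comment above) is to compare with the rule-of-thumb bandwidth: with sd(X) close to 1 and n=100, it is about 1.06*100^(-1/5), roughly 0.42, so a cross-validated value around 0.5 is at least of the right order.

> # rule-of-thumb bandwidth for the n=100 sample above, for comparison;
> # with sd(X) near 1 this is roughly 1.06*100^(-1/5), i.e. about 0.42
> 1.06*sd(X)*length(X)^(-1/5)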