Statistics, and the Goldilocks Principle

At the end of May, in Toronto, we had a great talk at the SSC by Jeff Rosenthal on Monte Carlo techniques, and Jeff mentioned the “Goldilocks principle” (it was in the context of MCMC, and I also mentioned it in my talk on MCMC in London, when I discussed the rejection rate of the Metropolis-Hastings algorithm, which should be neither too large nor too small…). In the Goldilocks story, there are always three alternatives: one is always too extreme in one direction (too hot, for the soup, or too large, for the bed or the chair), one is too extreme in the opposite direction (too cold, or too small), and one is “just right”.
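To illustrate the MCMC point, here is a minimal random-walk Metropolis sketch with a standard Gaussian target (the function rw_metropolis and the three proposal scales below are my own illustrative choices, not from the talk): with a tiny proposal scale almost every move is accepted but the chain barely explores, with a huge one almost everything is rejected, and an intermediate scale is “just right”,

# random-walk Metropolis with a N(0,1) target and N(0,sigma^2) proposal (sketch)
rw_metropolis = function(nsteps, sigma, x0=0){
  x = numeric(nsteps); x[1] = x0; acc = 0
  for(t in 2:nsteps){
    prop = x[t-1] + rnorm(1, sd=sigma)
    # accept with probability min(1, dnorm(prop)/dnorm(x[t-1]))
    if(runif(1) < dnorm(prop)/dnorm(x[t-1])){ x[t] = prop; acc = acc + 1 } else { x[t] = x[t-1] }
  }
  acc/(nsteps - 1)
}
set.seed(1)
sapply(c(.1, 2.4, 50), function(s) rw_metropolis(1e4, s))
# acceptance rates: close to 1 (step too small), moderate, close to 0 (step too large)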

The principle is everywhere in statistics, whenever a parameter has to be fixed: usually, it should be neither too large nor too small. In one case, there will be a small variance and a large bias; in the other case, a small bias and a large variance. It should be “just right”. Consider, for instance, kernel smoothing for density estimation. The standard estimator is

$$\hat{f}_h(x)=\frac{1}{nh}\sum_{i=1}^n K\!\left(\frac{x-X_i}{h}\right)$$

where $K(\cdot)$ is some kernel, and $h>0$ is a bandwidth. The value of the bandwidth should not be too large,

> n=100
> bsilverman=1.06*n^(-1/5)
> bsilverman
[1] 0.4219936
> set.seed(1)
> sample_X=matrix(rnorm(n*10),n,10)
> b=2*bsilverman
> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),col="red",
+ type="l",ylim=c(0,.5),lwd=2)
> for(i in 1:10){
+ lines(density(sample_X[,i],
+ bw=b),col="blue")
+ }

(because here the estimate is too smooth, with a small variance but a strong bias, as we can see by comparing the blue lines, based on various samples, with the red line, the true density); it should not be too small,

> b=bsilverman/2
> plot(u,dnorm(u),col="red",
+ type="l",ylim=c(0,.5),lwd=2)
> for(i in 1:10){
+ lines(density(sample_X[,i],
+ bw=b),col="blue")
+ }

(because here the estimate is too erratic, with a large variance but a small bias, as we can see by comparing the blue lines with the red one); it has to be “just right”.

> b=bsilverman
> plot(u,dnorm(u),col="red",
+ type="l",ylim=c(0,.5),lwd=2)
> for(i in 1:10){
+ lines(density(sample_X[,i],
+ bw=b),col="blue")
+ }
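As a side note, the estimator in the formula above can also be coded by hand instead of calling density(); a minimal sketch, assuming a Gaussian kernel (the name fhat is just an illustrative choice), which should reproduce essentially the same blue curves,

# hand-coded kernel density estimator with a Gaussian kernel (sketch)
fhat = function(x, X, h){
  sapply(x, function(x0) mean(dnorm((x0 - X)/h)) / h)
}
plot(u, dnorm(u), col="red", type="l", ylim=c(0,.5), lwd=2)
for(i in 1:10) lines(u, fhat(u, sample_X[,i], bsilverman), col="blue")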

The standard idea is to minimize the integrated mean squared error,

$$\text{MISE}(h)=\mathbb{E}\left[\int \left(\hat{f}_h(x)-f(x)\right)^2 dx\right]$$

and the (asymptotically) optimal value is

$$h^\star=\left(\frac{\int K(u)^2\,du}{n\left(\int u^2 K(u)\,du\right)^2 \int f''(x)^2\,dx}\right)^{1/5}$$

In the case of a Gaussian sample (that we consider here to illustrate), we have Silverman‘s rule,

$$h^\star=\left(\frac{4\,\hat{\sigma}^5}{3n}\right)^{1/5}\approx 1.06\,\hat{\sigma}\,n^{-1/5}$$
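Indeed, $(4/3)^{1/5}\approx 1.0592$, which is where the 1.06 used above for bsilverman comes from; a quick check in R (with $\hat{\sigma}=1$, since the samples are standard Gaussian),

(4/3)^(1/5)       # about 1.0592, usually rounded to 1.06
(4/(3*n))^(1/5)   # Silverman's bandwidth with sigma = 1, essentially bsilverman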

In order to illustrate it, let us consider the optimal value obtained from 1,000 samples,

> nsim=1000
> mat_sample_x=matrix(rnorm(n*nsim),n,nsim)
> vec_fh=function(x,b=.3){
+ sapply(1:nsim,function(i){
+  density(mat_sample_x[,i],bw=b,
+  from=x,to=x,n=1)$y})}
> f=function(x){
+ dnorm(x)}
> mise1=
+ Vectorize(function(bw){
+ integrate(function(x){
+ (mean(vec_fh(x,b=bw))-f(x))^2},
+ lower=-Inf,upper=Inf)$value})
> mise2=Vectorize(function(bw){
+ integrate(Vectorize(function(x){
+ var(vec_fh(x,b=bw))}),
+ lower=-Inf,upper=Inf)$value})
> mise=function(b){mise1(b)+mise2(b)}

If we minimize the average (instead of the expected) integrated squared error, we get

> optimize(mise,c(0,2))
$minimum
[1] 0.4465461

$objective
[1] 0.03335771

(which is not far from Silverman’s value), and if we plot the two parts of the MISE (the integrated squared bias and the integrated variance), we get

> B=seq(0.02,2,by=.02)
> Ymise= mise(B)
> Ymise1=mise1(B)
> Ymise2=mise2(B)
> plot(B,Ymise,type="l",col="black",lwd=3)
> lines(B,Ymise1,col="red")
> lines(B,Ymise2,col="blue")

We clearly observe an optimal value, neither too small (otherwise the estimator has too much variance) nor too large (otherwise the bias is too large): it has to be “just right”.
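In practice, data-driven selectors try to find this “just right” bandwidth automatically. For instance (my choice of selectors, not discussed above), the normal reference rule, least-squares cross-validation and the Sheather-Jones plug-in from R’s stats package should all return values in the same neighbourhood here,

# a few standard bandwidth selectors, on one of the simulated samples
bw.nrd(sample_X[,1])   # normal reference rule (the 1.06 constant)
bw.ucv(sample_X[,1])   # unbiased (least-squares) cross-validation
bw.SJ(sample_X[,1])    # Sheather-Jones plug-in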




One thought on “Statistics, and the Goldilocks Principle”

  1. I have been conducting numerical experiments on kernel density estimation with Gaussian kernel and Silverman’s rule for the bandwidth. I observed a behavior that I do not completely understand, but I could not find any explanation by searching on the Internet (only here: https://stats.stackexchange.com/questions/432487/behavior-of-kernel-density-estimation). Suppose that $X$ follows a Normal distribution of mean 0 and variance 1, and let $f_X$ be its density function. We study the convergence of the kernel density estimate with $M$ data points, $\hat{f}_X^M$. Consider the error $E[\|f_X-\hat{f}_X^M\|_\infty]$, where $E$ denotes the expectation, which is estimated via a sample average. In log-log scale, I observed that the error decreases linearly with $M$, but from a certain size $M_0$ it starts to stabilize for $M\geq M_0$. I thought that there was convergence as $M\rightarrow\infty$. Does the stabilization occur because of accumulated numerical errors? Is the bandwidth too small for large $M$?
