Heuristics on bias and variance for kernel density estimators

Consider the simple case of a moving histogram (which is a very simple kernel). The idea is to recall that

$$f(x)=F'(x)=\lim_{h\downarrow 0}\frac{F(x+h)-F(x-h)}{2h}$$

where

$$\frac{F(x+h)-F(x-h)}{2h}$$

is the slope of the cumulative distribution function $F$ close to point $x$.

Then we use the empirical cumulative distribution function $\widehat{F}_n(t)=\frac{1}{n}\sum_{i=1}^n\mathbf{1}(x_i\leq t)$ to approximate that slope, i.e.

$$\widehat{f}_h(x)=\frac{\widehat{F}_n(x+h)-\widehat{F}_n(x-h)}{2h}$$

which can also be written

$$\widehat{f}_h(x)=\frac{1}{2nh}\sum_{i=1}^n\mathbf{1}\big(x_i\in[x-h,x+h]\big)$$
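This estimator is straightforward to code. Below is a minimal Python sketch (my own illustration, not code from the post; the function name moving_histogram and the simulated Gaussian sample are assumptions): it counts the observations falling in $[x-h,x+h]$ and rescales by $2nh$.

```python
import numpy as np

def moving_histogram(x, sample, h):
    """Moving-histogram estimate of the density at x: the proportion of
    observations falling in [x-h, x+h], divided by the window length 2h."""
    sample = np.asarray(sample)
    return np.sum(np.abs(sample - x) <= h) / (2 * len(sample) * h)

# Illustration on a simulated standard Gaussian sample
rng = np.random.default_rng(1)
sample = rng.normal(size=1000)
print(moving_histogram(0.0, sample, h=0.2))  # should be close to 1/sqrt(2*pi) ~ 0.399
```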

Consider now the density estimator seen as a random variable,

$$\widehat{f}_h(x)=\frac{1}{2nh}\sum_{i=1}^n Y_i$$

where the $Y_i$'s are i.i.d. Bernoulli variables, $Y_i=\mathbf{1}\big(X_i\in[x-h,x+h]\big)\sim\mathcal{B}(p_x)$, with

$$p_x=\mathbb{P}\big(X_i\in[x-h,x+h]\big)=F(x+h)-F(x-h)=\int_{x-h}^{x+h}f(t)\,dt$$
Thus, observe that $\mathbb{E}\big[\widehat{f}_h(x)\big]=\dfrac{p_x}{2h}=\dfrac{F(x+h)-F(x-h)}{2h}$, but that's not what we're looking for, namely $f(x)$… From Taylor's expansion,

$$F(x+h)-F(x-h)=2h\,f(x)+\frac{h^3}{3}f''(x)+o(h^3)$$

thus

$$\mathbb{E}\big[\widehat{f}_h(x)\big]=f(x)+\frac{h^2}{6}f''(x)+o(h^2)$$

where the bias comes from approximating the density $f(x)=F'(x)$ by the slope of a chord of $F$ (a secant rather than the tangent).
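The Taylor expansion above can be checked symbolically. Here is a short sketch using sympy (my own illustration, not part of the original post), expanding $F(x+h)-F(x-h)$ around $h=0$ for a generic function $F$:

```python
from sympy import Function, symbols

x, h = symbols('x h')
F = Function('F')

# Series expansion of F(x+h) - F(x-h) in h around 0: the even-order terms
# cancel, leaving 2*h*F'(x) + (h**3/3)*F'''(x) + O(h**4), i.e.
# 2*h*f(x) + (h**3/3)*f''(x) + o(h**3), since f = F'.
print((F(x + h) - F(x - h)).series(h, 0, 4))
```

Dividing by $2h$ then gives the expression for $\mathbb{E}\big[\widehat{f}_h(x)\big]$ above.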

About the variance,

$$\text{Var}\big[\widehat{f}_h(x)\big]=\frac{1}{(2nh)^2}\,n\,p_x(1-p_x)=\frac{p_x(1-p_x)}{4nh^2}$$

thus, since $p_x=2h\,f(x)+o(h)$ as $h\downarrow 0$,

$$\text{Var}\big[\widehat{f}_h(x)\big]\sim\frac{2h\,f(x)\big(1-2h\,f(x)\big)}{4nh^2}$$

i.e.

$$\text{Var}\big[\widehat{f}_h(x)\big]\sim\frac{f(x)}{2nh}$$
We can observe that the bias,

$$\text{bias}\big[\widehat{f}_h(x)\big]=\mathbb{E}\big[\widehat{f}_h(x)\big]-f(x)=\frac{h^2}{6}f''(x)+o(h^2),$$

is decreasing (of order $h^2$) as $h\downarrow 0$, while the variance (of order $(nh)^{-1}$) is increasing as $h\downarrow 0$. This is the standard bias-variance tradeoff in statistics.
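To see this tradeoff numerically, one can repeat the estimation over many simulated samples and track how the bias and the variance of $\widehat{f}_h(0)$ move with $h$. The sketch below is my own illustration under assumed settings (standard Gaussian data, $n=200$ observations, 2000 Monte Carlo replications), not a reproduction of the post's code:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_sim, x0 = 200, 2000, 0.0
f_true = 1 / np.sqrt(2 * np.pi)            # N(0,1) density at x0 = 0

for h in [0.05, 0.1, 0.2, 0.5, 1.0]:
    samples = rng.normal(size=(n_sim, n))  # n_sim samples of size n
    # moving-histogram estimate of f(0) for each simulated sample
    f_hat = np.sum(np.abs(samples - x0) <= h, axis=1) / (2 * n * h)
    print(f"h={h:.2f}  bias={f_hat.mean() - f_true:+.4f}  "
          f"variance={f_hat.var():.5f}  f(0)/(2nh)={f_true / (2 * n * h):.5f}")
```

As $h$ decreases, the bias shrinks (in absolute value) while the variance grows, and the variance gets closer to its asymptotic approximation $f(0)/(2nh)$.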



