Tag Archives: cross-validation

Lilliefors, Kolmogorov-Smirnov and cross-validation

In statistics, the Kolmogorov–Smirnov test is a popular procedure to test, from a sample \{x_1,\cdots,x_n\}, whether it is drawn from a distribution F, or usually F_{\theta_0}, where F_{\theta} is some parametric distribution. For instance, we can test H_0:X_i\sim\mathcal{N}(0,1) (where \theta_0=(\mu_0,\sigma_0^2)=(0,1)) using that test. More specifically, I wanted to discuss p-values today. Given n, let us draw \mathcal{N}(0,1) samples of size n, and compute the p-values of the Kolmogorov–Smirnov tests

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",0,1)$p.value
}

We can visualise the distribution of the p-values below (I added some Beta distribution fit here)

library(fitdistrplus)
fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

It looks like it is quite uniform (theoretically, the p-value is uniformly distributed under H_0). More specifically, the p-value was lower than 5% in 5% of the samples


mean(p<=.05)
[1] 0.0479

i.e. we wrongly reject H_0:X_i\sim\mathcal{N}(0,1) in 5% of the samples.

As discussed previously on the blog, in many cases, we care about the distribution, and not really the parameters, so we wish to test something like H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2. Therefore, a natural idea is to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2), for some estimates of \mu and \sigma^2. That is the idea of the Lilliefors test. More specifically, the Lilliefors test uses the Kolmogorov–Smirnov statistic, but corrects the p-value. Indeed, if we draw many samples, and use the Kolmogorov–Smirnov statistic and its classical p-value to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2),

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
}

we see clearly that the distribution of p-values is no longer uniform

fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

More specifically, if the x_i‘s are actually drawn from some Gaussian distribution, there is almost no chance to reject H_0, the p-value being almost never below 5%

mean(p<=.05)
[1] 0.00012
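
For comparison, the Lilliefors correction is implemented, for the Gaussian case, in the lillie.test function of the nortest package (assuming that package is installed). A quick sketch, on the same simulation design (with only 1e4 replications here, to keep it fast), would be

# sketch: p-values of the Lilliefors test (nortest package assumed installed)
library(nortest)
n = 300
p_lillie = rep(NA,1e4)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  p_lillie[s] = lillie.test(X)$p.value
}
mean(p_lillie <= .05)   # should be close to 5%, since the p-values are corrected for estimated parameters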

Usually, to interpret that lack of rejection with the uncorrected test, the heuristic is that \hat\mu and \hat\sigma^2 are both based on the sample, while previously 0 and 1 were based on some prior knowledge. Somehow, it reminded me of the classical problem we mention when we introduce cross-validation, which is Goodhart’s law

When a measure becomes a target, it ceases to be a good measure

i.e. we cannot assess goodness of fit using the same data as the ones used to estimate the parameters. So here, why not use some hold-out (or cross-validation) procedure: split the dataset in two parts, \{x_1,\cdots,x_k\} (with k<n) to estimate the parameters \mu and \sigma^2, and then use \{x_{k+1},\cdots,x_n\} and the Kolmogorov–Smirnov statistic on it to test if the x_i‘s are drawn from some Gaussian distribution. More precisely, will the p-value computed using the standard Kolmogorov–Smirnov procedure be ok here? Here, I tried two scenarios, k/n being either 1/3 or 2/3,

p = matrix(NA,1e5,4)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s,1] = ks.test(X,"pnorm",0,1)$p.value
p[s,2] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
p[s,3] = ks.test(X[1:200],"pnorm",mean(X[201:300]),sd(X[201:300]))$p.value
p[s,4] = ks.test(X[201:300],"pnorm",mean(X[1:200]),sd(X[1:200]))$p.value
}

Again, we can visualize the distributions of p-values,  in the case where 1/3 of the data is used to estimate \mu and \sigma^2, and 2/3 of the data is used to test

fit.dist = fitdist(p[,3],"beta")
hist(p[,3],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


and in the case where 2/3 of the data is used to estimate \mu and \sigma^2, and 1/3 of the data is used to test

fit.dist = fitdist(p[,4],"beta")
hist(p[,4],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


Observe here that we (wrongly) reject H_0 too frequently, since the p-values are below 5% in 25% of the scenarios in the first case (less data used to estimate), and in 9% of the scenarios in the second case (less data used to test)

mean(p[,3]<=.05)
[1] 0.24168
mean(p[,4]<=.05)
[1] 0.09334

We can actually compute that probability as a function of k/n

n=300
p = matrix(NA,1e4,99)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  KS = function(prop) ks.test(X[1:(prop*n)],"pnorm",mean(X[(prop*n+1):n]),sd(X[(prop*n+1):n]))$p.value
  p[s,] = Vectorize(KS)((1:99)/100)
}

The evolution of the probability is the following

prob5pc = apply(p,2,function(x) mean(x<=.05))
plot((1:99)/100,prob5pc)

so, it looks like we can use some sort of hold-out procedure to test H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2, using a Kolmogorov–Smirnov test with \mu=\hat\mu and \sigma^2=\hat\sigma^2, but the proportion of data used to estimate those quantities should be (much) larger than the one used to compute the statistic. Otherwise, we clearly reject H_0 too frequently.

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of fit of a model, since it is quite easy to increase it artificially. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j, all of them being (by construction) independent. And we only have 100 observations… Consider here the regression on the first k features, and compute the R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph… we have the average obtained over 1,000 randomly generated samples, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).

Good news: as we’ve seen in the course, the adjusted R^2 – denoted \bar R^2 – might help. Observe that \mathbb{E}[\bar R^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with the Akaike criterion AIC_k and the Schwarz (Bayesian) criterion BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense: we should not prefer the model with 10 covariates to the one with none. The strange thing is the far-right behavior: there, we prefer 80 random noise features to none, which I find hard to interpret… For the BIC, the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not): what about the leave-one-out cross-validation mean squared error? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i}, where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i} and \widehat{y}_{-i} is the predicted value of the ith observation, obtained with the model estimated when that observation is deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1}, where H is the classical hat matrix, and thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat
  mean( (residuals(model)/(1-h))^2 )}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense: adding noisy features yields overfitting, so the (leave-one-out) mean squared error keeps increasing!

That’s all nice, but it might not be very realistic… Here, for the model with only one variable, I just picked one at random… In practice, we try to get the “best” one… So a more natural idea would be to order the variables according to their correlations with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a higher slope at the beginning… For the \bar R^2_k, we might actually prefer a correlated noise variable to nothing (which makes sense, actually). So here, since we somehow chose our variables, \bar R^2_k seems to be always positive…

For the AIC_k, here also, there is an initial improvement, before coming back to the original situation (with about 80 features); and here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon, we have a deterioration…. even if here also, we have the drop at the far right (with more than 95 features… for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation has not been misled by our noisy variables: the cross-validated mean squared error keeps increasing as we add them!

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms, to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace. In some cases, instead of global optimization, it is sufficient to consider optimization by coordinates (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h and any i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. then a coordinatewise minimum is again a global minimum. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in automatic learning: a technique is interesting if there is a stable and fast algorithm, which allows to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (the so-called support vector machines) when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
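
To fix ideas, here is a minimal sketch of such a coordinate descent for the Lasso, with standardized covariates, no intercept, and the objective written as \frac{1}{2n}\|\mathbf{y}-\mathbf{X}\beta\|_{\ell_2}^2+\lambda\|\beta\|_{\ell_1}; the helper names soft_threshold and lasso_cd are made up for this illustration, and the loop is kept deliberately naive,

# minimal coordinate descent for the Lasso (illustration only)
# objective: (1/(2n))*||y - X %*% beta||^2 + lambda*sum(abs(beta)), columns of X standardized
soft_threshold = function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

lasso_cd = function(X, y, lambda, n_iter = 100){
  n = nrow(X); d = ncol(X)
  beta = rep(0, d)
  for(k in 1:n_iter){
    for(j in 1:d){
      r_j = y - X[, -j, drop = FALSE] %*% beta[-j]        # partial residuals, coordinate j left out
      z_j = sum(X[, j] * r_j) / n
      beta[j] = soft_threshold(z_j, lambda) / (sum(X[, j]^2) / n)
    }
  }
  beta
}
# e.g. beta_hat = lasso_cd(scale(X0), y0 - mean(y0), lambda = .1), with X0, y0 some (hypothetical) data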

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method, with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in separating the sample (of size n) into two parts: a part that will be used to train the model (the training database, in-sample, of size m) and a part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are independent and centered. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk equals \sigma^2 \text{trace}(\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big), where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and, by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version, with \widehat{\beta} estimated on the first m observations, \widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\text{ and }\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2, then, as Leeb (2008) noted, the difference \widehat{\mathcal{R}}^{\text{OS}}-\widehat{\mathcal{R}}^{\text{IS}} is of the order of 2\cdot\nu, where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
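
A small simulation can illustrate that gap between the two empirical risks (a sketch only, with an arbitrary linear model, p=10 covariates and m=n-m=100 observations),

# sketch: in-sample vs out-of-sample empirical risks on simulated linear data
set.seed(123)
n = 200; m = 100; p = 10; sigma = 1
beta0 = rep(1, p)
R_IS = R_OS = rep(NA, 1e3)
for(s in 1:1e3){
  X = matrix(rnorm(n * p), n, p)
  y = X %*% beta0 + rnorm(n, 0, sigma)
  Xtrain = X[1:m, ];       ytrain = y[1:m]
  Xtest  = X[(m+1):n, ];   ytest  = y[(m+1):n]
  fit = lm(ytrain ~ 0 + Xtrain)                     # beta estimated on the first m observations
  R_IS[s] = sum((ytrain - Xtrain %*% coef(fit))^2)
  R_OS[s] = sum((ytest  - Xtest  %*% coef(fit))^2)
}
mean(R_OS - R_IS)   # should be close to 2 * p * sigma^2, the "2 nu" gap mentioned above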

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} according to the complexity of the model (the degree of a polynomial regression, the number of knots in splines, etc). The more complex the model, the smaller \widehat{\mathcal{R}}^{\text{IS}} (this is the red curve, below). But that’s not what we’re interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (just as it fits the in-sample data poorly). But if the model is too complex, we are in a situation of “overfitting”: the model starts to model the noise. Of course, this figure should remind us of the one we’ve seen in our second post of that series

Figure 4 : Generalization, under- and over-fitting

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n(\mathbf{y})=T_n(y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n(\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we will compare the forecast obtained with the estimated model and the removed observation, \widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We will speak here of the “leave-one-out” (loocv) method.
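
With a quadratic loss and a model fitted by lm, a naive computation of \widehat{\mathcal{R}}^{\text{CV}} could look like the following sketch (the function name, and the response stored in a column named y, are just placeholders),

# naive leave-one-out cross-validation risk (quadratic loss), for illustration
loocv_risk = function(formula, data){
  n = nrow(data)
  err = rep(NA, n)
  for(i in 1:n){
    fit_i = lm(formula, data = data[-i, ])                    # model estimated without observation i
    err[i] = data$y[i] - predict(fit_i, newdata = data[i, ])  # prediction error on the removed observation
  }
  mean(err^2)
}
# e.g. loocv_risk(y ~ x1 + x2, data = some_data)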

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we will consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace, as described in Hyndman et al. (2009).
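
As a sketch, that calibration of \alpha, with a quadratic loss and the recursion written exactly as above, could be

# sketch: optimal smoothing parameter via one-step-ahead squared errors
ses_sse = function(alpha, y){
  yhat = y[1]                                   # initial smoothed value
  sse  = 0
  for(t in 2:length(y)){
    sse  = sse + (y[t] - yhat)^2                # one-step-ahead forecast error
    yhat = alpha * yhat + (1 - alpha) * y[t]    # update, as in the recursion above
  }
  sse
}
# alpha_star = optimize(ses_sse, interval = c(0, 1), y = some_series)$minimum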

The main problem with the leave-one-out method is that it requires the calibration of n models, which can be problematic in large dimensions. An alternative method is cross-validation by k blocks (so-called “k-fold cross-validation”), which consists in using a partition of \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us write \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Denoting by \widehat{m}_{(j)} the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimations to perform is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated to each other, which tends to avoid excess variance, as recalled by James et al. (2013).
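
Again as a sketch (lm model, quadratic loss, response assumed to be stored in a column named y),

# sketch: k-fold cross-validation risk (quadratic loss)
kfold_risk = function(formula, data, k = 10){
  n = nrow(data)
  fold = sample(rep(1:k, length.out = n))           # random partition into k blocks
  R_j = rep(NA, k)
  for(j in 1:k){
    fit_j = lm(formula, data = data[fold != j, ])   # model built without block j
    pred  = predict(fit_j, newdata = data[fold == j, ])
    R_j[j] = mean((data$y[fold == j] - pred)^2)     # average loss on block j
  }
  mean(R_j)
}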

Another alternative is to use bootstrapped samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement in \{1,\cdots,n\}, to know which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Write \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting by \widehat{m}_{(b)} the model built on the sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{1}{n_{\overline{b}}}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that with this technique, on average e^{-1}\sim36.7\% of the observations do not appear in the bootstrapped sample, and we recover an order of magnitude of the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) showed, minimizing AIC is asymptotically equivalent to leave-one-out cross-validation, and Shao (1997) showed that minimizing BIC corresponds to k-fold cross-validation, with k=n/\log n.
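
And a similar sketch for that out-of-bag computation (same assumptions: lm model, quadratic loss, response in a column named y),

# sketch: out-of-bag validation on B bootstrap samples (quadratic loss)
oob_risk = function(formula, data, B = 100){
  n = nrow(data)
  R_b = rep(NA, B)
  for(b in 1:B){
    idx = sample(1:n, n, replace = TRUE)            # bootstrap sample I_b (with replacement)
    oob = setdiff(1:n, idx)                         # the ~36.7% observations left out of I_b
    fit_b = lm(formula, data = data[idx, ])
    pred  = predict(fit_b, newdata = data[oob, ])
    R_b[b] = mean((data$y[oob] - pred)^2)           # average loss on the out-of-bag observations
  }
  mean(R_b)
}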

All those techniques here are mentioned in the “machine learning” section since they rely on automatic, computational techniques, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, it is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentioned ten points against a stepwise procedure.

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large (see Tibshirani (1996)).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.


Cross Validation for Kernel Density Estimation

In a post published in July, I mentioned the so-called Goldilocks principle, in the context of kernel density estimation and bandwidth selection. The bandwidth should not be too small (the variance would be too large) and it should not be too large (the bias would be too large). Another standard method to select the bandwidth, as mentioned this afternoon in class, is the cross-validation technique (described in Chiu (1991)). Here, we would like to minimize

\mathbb{E}\left[\int [\widehat{f}_h(x)-f(x)]^2dx\right]

The integral can be written

\int \widehat{f}_h(x)^2dx-2\int \widehat{f}_h(x)f(x)dx+\int f(x)^2dx

Since the third term does not depend on h, we only have to minimize the expected value of the sum of the first two.

The idea is to approximate it as

J(h)=\int \widehat{f}_h(x)^2dx-\frac{2}{n}\sum_{i=1}^n \widehat{f}_{(-i)}(X_i)

which can easily be computed. Consider here some sample, with 50 observations, from a Gaussian distribution,

> set.seed(1)
> X=rnorm(50)

From Silverman’s rule of thumb (which should be appropriate here since the sample is drawn from a Gaussian distribution) the optimal bandwidth is

> 1.06*sd(X)*length(X)^(-1/5)
[1] 0.4030127

Using the cross-validation technique mentioned above, compute

> J=function(h){
+ fhat=Vectorize(function(x) density(X,from=x,to=x,n=1,bw=h)$y)
+ fhati=Vectorize(function(i) density(X[-i],from=X[i],to=X[i],n=1,bw=h)$y)
+ F=fhati(1:length(X))
+ return(integrate(function(x) fhat(x)^2,-Inf,Inf)$value-2*mean(F))
+ }
> vx=seq(.1,1,by=.01)
> vy=Vectorize(J)(vx)
> df=data.frame(vx,vy)
> library(ggplot2)
> qplot(vx,vy,geom="line",data=df)

The function has the following shape

and the optimal value is

> optimize(J,interval=c(.1,1))
$minimum
[1] 0.4687553

$objective
[1] -0.3355477

Note that, indeed, it is close to Silverman’s optimal bandwidth.