A common idea in statistics is that if we don't know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider an i.i.d. sample $X_1,\dots,X_n$ from a Gaussian distribution $\mathcal{N}(\mu,\sigma^2)$. If the variance $\sigma^2$ is known, a confidence interval for the mean is
$$\left[\bar{x}_n \pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right],$$
but if $\sigma^2$ is unknown, we have to plug in its estimator $s_n$, and the cost we have to pay is that the new confidence interval is
$$\left[\bar{x}_n \pm t_{1-\alpha/2}(n-1)\,\frac{s_n}{\sqrt{n}}\right],$$
where now $t_{1-\alpha/2}(n-1)$ is the quantile of the Student distribution, of probability level $1-\alpha/2$, with $n-1$ degrees of freedom.
We call it a cost since the new confidence interval is now larger (the Student distribution has higher upper-quantiles than the Gaussian distribution).
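To fix ideas, here is a minimal numerical check in Python (with scipy, and a small simulated sample used purely for illustration; the sample size, seed and level are arbitrary choices of this sketch): the Student quantile exceeds the Gaussian one, so the interval based on the estimated variance is typically wider.

```python
# Compare the Gaussian-quantile interval (known variance) with the
# Student-quantile interval (estimated variance) on a toy sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu, sigma, alpha = 20, 0.0, 1.0, 0.05
x = rng.normal(mu, sigma, size=n)

xbar, s = x.mean(), x.std(ddof=1)

# Known variance: Gaussian quantile
z = stats.norm.ppf(1 - alpha / 2)
ci_gauss = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# Unknown variance: Student quantile with n-1 degrees of freedom
t = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci_student = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

print(z, t)                    # t > z for any finite n
print(ci_gauss, ci_student)    # the Student interval is wider on average
```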
So usually, if we substitute an estimate for the true value, there is a price to pay.
A few years ago, with Jean-David Fermanian and Olivier Scaillet, we were writing a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the fact that we assumed we wanted to fit a copula to an i.i.d. sample with distribution $C$, a copula, but in practice, we start from a sample $(X_{i,1},X_{i,2})$ with joint distribution $F$ (assumed to have continuous margins $F_1$ and $F_2$, and a – unique – copula $C$). But since the margins are usually unknown, there should be a price for not observing them.
To be more formal, in a perfect world, we would consider the sample
$$\{(U_{i,1},U_{i,2})\} = \{(F_1(X_{i,1}),\,F_2(X_{i,2}))\},$$
but in the real world, we have to consider the pseudo-sample
$$\{(\hat U_{i,1},\hat U_{i,2})\} = \{(\hat F_1(X_{i,1}),\,\hat F_2(X_{i,2}))\},$$
where it is standard to consider ranks, i.e. $\hat F_1$ and $\hat F_2$ are the empirical cumulative distribution functions.
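In code, the two samples only differ in whether the true or the empirical margins are applied. Here is a minimal sketch in Python; the Gaussian copula with standard normal margins is just an assumption made for the illustration, so that the true margins are available for comparison.

```python
# Build the "perfect world" sample (true margins) and the "real world"
# pseudo-sample (ranks) from the same bivariate data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, rho = 200, 0.5
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Perfect world: apply the true margins, U_i = (F_1(X_{i,1}), F_2(X_{i,2}))
u_true = stats.norm.cdf(z)

# Real world: apply the empirical margins, i.e. (rescaled) ranks
ranks = np.argsort(np.argsort(z, axis=0), axis=0) + 1   # ranks 1..n per column
u_pseudo = ranks / (n + 1.0)                             # n+1 keeps values inside (0,1)
```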
My point is that when I ran simulations for the survey (the idea was more to give illustrations of several techniques of estimation, rather than proofs of technical theorems), we observed that the price to pay… was negative! That is, the variance of the estimator of the density (anywhere on the unit square) was smaller on the pseudo-sample $\{(\hat U_{i,1},\hat U_{i,2})\}$ than on the perfect sample $\{(U_{i,1},U_{i,2})\}$.
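For what it is worth, here is a small Monte Carlo sketch of that observation. This is not the code from the survey: a plain product Gaussian kernel evaluated at the centre of the unit square, a hand-picked bandwidth, and a Gaussian copula are all assumptions of this illustration, and boundary corrections are ignored.

```python
# Monte Carlo comparison of the variance of a kernel copula density
# estimator at (0.5, 0.5), using true margins versus ranks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, rho, h, nrep = 200, 0.5, 0.1, 2000
u0 = np.array([0.5, 0.5])          # evaluation point on the unit square

def kde_at(u, point, h):
    """Product Gaussian kernel density estimate at `point`."""
    w = stats.norm.pdf((point - u) / h) / h
    return np.mean(w.prod(axis=1))

est_true, est_pseudo = [], []
for _ in range(nrep):
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u_true = stats.norm.cdf(z)                                          # known margins
    u_pseudo = (np.argsort(np.argsort(z, axis=0), axis=0) + 1) / (n + 1.0)  # ranks
    est_true.append(kde_at(u_true, u0, h))
    est_pseudo.append(kde_at(u_pseudo, u0, h))

# In line with the observation above, the second variance tends to be smaller.
print(np.var(est_true), np.var(est_pseudo))
```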
At the time, we could not understand why we got that counter-intuitive result: even if we do know the true margins, it is better not to use them, and to use instead a nonparametric estimator. Our interpretation was based on the discrepancy concept and was related to the Latin hypercube construction:
With ranks, the data are more regular, and the marginal distributions are exactly uniform on the unit interval. So there is less variance.
This was our heuristic interpretation.
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition in an article published in the Journal of Multivariate Analysis.
Well, we observed something for finite $n$, but Christian and Johan obtained an analytical result. Hence, if we denote
$$C_n(u,v) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\{U_{i,1}\le u,\; U_{i,2}\le v\}$$
the empirical copula in the perfect world (with known margins) and
$$\hat C_n(u,v) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\{\hat U_{i,1}\le u,\; \hat U_{i,2}\le v\}$$
the one constructed from the pseudo-sample, they obtained that, everywhere on the unit square,
$$\lim_{n\to\infty}\operatorname{Var}\!\big(\sqrt{n}\,[\hat C_n(u,v)-C(u,v)]\big)\;\le\;\lim_{n\to\infty}\operatorname{Var}\!\big(\sqrt{n}\,[C_n(u,v)-C(u,v)]\big),$$
with nice graphs of the ratio of those asymptotic variances.
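If one wants to see that variance comparison on a toy example, a short simulation of the two empirical copulas at a fixed point does the job. A Gaussian copula and the point $(0.3,0.7)$ are assumptions of this sketch, not of the paper, which states the comparison under conditions on $C$.

```python
# Monte Carlo variances of the two empirical copulas at a fixed point,
# with known margins versus ranks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, rho, nrep = 200, 0.5, 5000
u0, v0 = 0.3, 0.7

cn, cn_hat = [], []
for _ in range(nrep):
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u_true = stats.norm.cdf(z)
    u_pseudo = (np.argsort(np.argsort(z, axis=0), axis=0) + 1) / (n + 1.0)
    cn.append(np.mean((u_true[:, 0] <= u0) & (u_true[:, 1] <= v0)))
    cn_hat.append(np.mean((u_pseudo[:, 0] <= u0) & (u_pseudo[:, 1] <= v0)))

# The variance ratio comes out below one, at least in this toy setting.
print(np.var(cn), np.var(cn_hat))
```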
So I was very happy last week when Christian showed me their results, to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result… If anyone has seen similar things, I'd be glad to hear about it!
I did not go into the details, but Christian and Johan's result is valid under a few assumptions (rather mild, to my taste) on the copula… But I will let you read the paper; I cannot chew through all the work for you either…