The central limit theorem is a fundamental theorem of statistics. It states that the sum of a sufficiently large number of independent and identically distributed random variables approximately follows a normal distribution.
History of the Central Limit Theorem
The term “central limit theorem” most likely traces back to Georg Pólya. As he recapitulated at the beginning of a paper published in 1920, it was generally known that the appearance of the Gaussian probability density \exp(-x^2) in a great many situations “can be explained by one and the same limit theorem”, which plays “a central role in probability theory”. Laplace had discovered the essentials of this fundamental theorem in 1810, and with the designation “central limit theorem of probability theory”, which was even emphasized in the paper’s title, Pólya gave it the name that has been in general use ever since.
In this paper of 1810, Laplace starts by proving the central limit theorem for certain particular probability distributions, and then continues with arbitrary discrete and continuous distributions. A more general (and more rigorous) proof should be attributed to Siméon Denis Poisson, who also intuited that weaker versions could easily be derived. For Poisson, as for Laplace, the main purpose of the central limit theorem was to be a tool in calculations, not so much a mathematical theorem in itself. Therefore, neither Laplace nor Poisson explicitly formulated any conditions for the theorem to hold. The mathematical formulation of the theorem is due to the St. Petersburg School of probability, between 1870 and 1910, with Chebyshev, Markov and Lyapunov.
Mathematical Formulation
Let X_1,X_2,\cdots,X_n,\cdots be independent random variables that are identically distributed, with mean \mu and finite variance \sigma^2. Let
\bar{X}_n=\frac{X_1+X_2+\cdots+X_n}{n}
then from the law of large numbers, [\bar{X}_n-\mu] tends to 0 as n tends to infinity. The central limit theorem establishes that the distribution of \sqrt{n}[\bar{X}_n-\mu] tends to a centered normal distribution when n goes to infinity. More specifically,
\mathbb{P}\left(\sqrt n \frac{[\bar X_n-\mu] }{\sigma }\leq x\right) \rightarrow \Phi(x)=\int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)dz.
We can also write
\sqrt{n}\left(\frac{\bar X_n-\mu}{\sigma}\right)\xrightarrow{\mathcal{L}}\ \mathcal{N}(0,1)
or
\sqrt{n}\left(\bar X_n-\mu\right)\xrightarrow{\mathcal{L}}\ \mathcal{N}(0,\sigma^2)
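This convergence can be visualised with a short simulation; the following Python sketch (the exponential distribution, the sample size and the number of replications are arbitrary illustrative choices) compares standardized sample means with the standard normal limit.

```python
# A minimal simulation sketch of the central limit theorem; the Exp(1)
# distribution and the sizes below are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0            # mean and standard deviation of Exp(1)
n, n_sim = 100, 10_000          # sample size and number of replications

# Simulate n_sim samples of size n and standardize the sample means
samples = rng.exponential(scale=1.0, size=(n_sim, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma

# Compare the empirical distribution of z with the standard normal limit
print(stats.kstest(z, "norm"))                   # Kolmogorov-Smirnov distance
print(np.mean(z <= 1.0), stats.norm.cdf(1.0))    # empirical vs limiting CDF at 1
```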
A limiting result as an approximation
This central limit theorem is used to approximate distributions derived from summing, or averaging, identically distributed random variables.
Consider for instance a course where 7 students out of 8 pass. What is the probability that at most 4 students fail in a class of 25? Let X be the dichotomous variable that describes failure: 1 if the student failed and 0 if the student passed. That random variable has a Bernoulli distribution with parameter p=1/8, with mean 1/8 and variance 7/64. Consequently, if students’ grades are independent, the sum S_n = X_1+X_2+\cdots+X_n follows a binomial distribution, with mean np and variance np(1-p), which can be approximated, by the central limit theorem, by a normal distribution with mean np and variance np(1-p). Here, \mu=3.125 while \sigma^2=2.734. To compute \mathbb P(S_n\leq 4), one can use either the binomial distribution or the Gaussian approximation. In the first case, the probability is 80.47 %,
\left(\frac{7}{8}\right)^{25} + 25 \left(\frac{7}{8}\right)^{24}\left(\frac{1}{8}\right)+ \frac{25\cdot 24}{2} \left(\frac{7}{8}\right)^{23}\left(\frac{1}{8}\right)^2+ \frac{25\cdot 24\cdot 23}{6} \left(\frac{7}{8}\right)^{22}\left(\frac{1}{8}\right)^3+ \frac{25\cdot 24\cdot 23\cdot 22}{24} \left(\frac{7}{8}\right)^{21}\left(\frac{1}{8}\right)^4
In the second case, use a continuity correction, and compute the probability that S_n is less than 4+1/2. From the central limit theorem,
\sqrt n \frac{[\bar X_n -\mu]}{ \sigma }= \sqrt{25}\,\frac{4.5/25 - 1/8}{\sqrt{7/64}}=0.8315
The probability that a standard Gaussian variable is less than this quantity is
\mathbb{P}(Z\leq 0.8315)=79.72 \%,
which can be compared with the 80.47 % obtained without the approximation, see Figure 1. Note that this approximation was obtained by De Moivre, in 1733, and is usually known as “Bernoulli’s law of large numbers”.
Figure 1: Gaussian approximation of the binomial distribution.
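The computation above can be checked numerically; the following Python sketch (relying on scipy.stats, an assumption of this illustration) compares the exact binomial probability with its Gaussian approximation under the continuity correction.

```python
# A short numerical check of the example above: n = 25 students,
# failure probability p = 1/8, exact binomial vs Gaussian approximation
# with a continuity correction.
import numpy as np
from scipy import stats

n, p = 25, 1 / 8
mean, var = n * p, n * p * (1 - p)          # 3.125 and about 2.734

exact = stats.binom.cdf(4, n, p)            # P(S_n <= 4), about 0.8047
z = (4.5 - mean) / np.sqrt(var)             # continuity correction, about 0.8315
approx = stats.norm.cdf(z)                  # about 0.7972

print(f"exact = {exact:.4f}, Gaussian approximation = {approx:.4f}")
```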
Asymptotic Confidence Intervals
The intuition is that a confidence interval is an interval in which one may be confident that a parameter of interest lies. For instance, suppose that some quantity is measured, but the measurement is subject to a normally distributed error, with known variance \sigma^2. If X has a \mathcal{N}(\mu,\sigma^2) distribution, we know that
\mathbb P(\mu-1.96\cdot \sigma< X <\mu+1.96\cdot \sigma) = 95\%
Equivalently, we could write
\mathbb P(X-1.96\cdot \sigma < \mu <X+1.96\cdot \sigma) = 95\%
or
\mathbb P(\mu \in [X\pm 1.96\cdot \sigma])=95 \%
Thus, if X is measured to be x, then the 95 % confidence interval for \mu is [x\pm 1.96\cdot \sigma].
In the context of Bernoulli trials (described above), the asymptotic 95 % confidence interval for p is
\left[\overline{x}\pm 1.96\cdot\frac{\sqrt{\overline{x}(1-\overline{x})}}{\sqrt{n}}\right]
A popular rule of thumb can be derived when p is close to 50 %. In that context, [p(1-p)]^{-\frac{1}{2}} is close to 2 (or 1.96), so that 1.96\cdot\sqrt{p(1-p)} is close to 1, and a 95 % approximate confidence interval is then
\left[\overline{x}\pm \frac{1}{\sqrt{n}}\right]
(see Figure 2). While that confidence interval provides a good approximation of the 95 % confidence interval when p is close to 50 %, it is an over-estimate when p is either much smaller or much larger.
Figure 2: law of large numbers on the left, with the convergence of \bar X_n towards p as n increases, and central limit theorem on the right, with the convergence of 2\sqrt{n}[\overline{X}_n-p] towards a Gaussian distribution. The red area is the 95% confidence region.
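As a short illustration, the following Python sketch computes both the asymptotic confidence interval and the 1/\sqrt{n} rule of thumb on a simulated sample (the sample size and the true value of p are arbitrary assumptions).

```python
# A small sketch of the asymptotic 95% confidence interval for a
# proportion, and the 1/sqrt(n) rule of thumb; the sample is simulated
# with arbitrary n and p.
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 0.5
x_bar = rng.binomial(1, p, size=n).mean()   # empirical proportion

half_width = 1.96 * np.sqrt(x_bar * (1 - x_bar)) / np.sqrt(n)
print("asymptotic 95% CI:", (x_bar - half_width, x_bar + half_width))
print("rule of thumb    :", (x_bar - 1 / np.sqrt(n), x_bar + 1 / np.sqrt(n)))
```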
The Delta Method and Method of Moments
The delta method is used to derive the asymptotic distribution of a transformation of an estimator that is already known to be asymptotically normal: if
\sqrt{n}\left(Z_n-\mu\right)\xrightarrow{\mathcal{L}}\ \mathcal{N}(0,\sigma^2)
then
\sqrt{n}\big(h(Z_n)-h(\mu)\big)\ \xrightarrow{\mathcal{L}}\ \mathcal{N}(0,\,h'(\mu)^2\cdot \sigma^2)
Consider now a parametric model, with X_1,X_2,\cdots,X_n independent, with identical distribution F_{\theta} (which can be a Weibull distribution to model a duration, a Pareto distribution to model income or wealth, etc). The method of moments is a method of estimating parameters based on equating population and sample values of certain moments of the distribution. For instance, if \mathbb E[X]=\mu(\theta), then the estimator \widehat{\theta} of the unknown parameter is given by the equation \mu(\widehat{\theta})=\overline{x}, or equivalently \widehat{\theta}=\mu^{-1}(\overline{x}). From the central limit theorem,
\sqrt{n}\left(\bar X_n-\mu\right)\xrightarrow{\mathcal{L}}\ \mathcal{N}(0,\sigma^2)
and applying the delta-method with h=\mu^{-1}, then
\sqrt{n}\big(\widehat{\theta}-\theta\big)\ \xrightarrow{\mathcal{L}}\ \mathcal{N}(0,\,h'(\mu)^2\cdot \sigma^2)
where a numerical approximation for the variance can be derived. This method has a long history, and has been intensively studied. Furthermore, this asymptotic normality can be used to compute a confidence interval, and also to derive an asymptotic testing procedure.
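As an illustration of the method of moments combined with the delta method, consider (as an assumption of this sketch) an exponential model F_\theta with \mathbb E[X]=1/\theta, so that \widehat{\theta}=1/\overline{x}; the delta method then gives an asymptotic variance \theta^2 for \sqrt{n}(\widehat{\theta}-\theta).

```python
# A hedged sketch: method of moments + delta method for an exponential
# model with E[X] = 1/theta (the model and the sample are illustrative).
import numpy as np

rng = np.random.default_rng(2)
theta, n = 2.0, 1_000
x = rng.exponential(scale=1 / theta, size=n)

# Method of moments: mu(theta) = 1/theta, hence theta_hat = 1 / x_bar
theta_hat = 1 / x.mean()

# Delta method with h(m) = 1/m: h'(m) = -1/m^2 and Var(X) = 1/theta^2,
# so sqrt(n)(theta_hat - theta) is asymptotically N(0, theta^2)
se = theta_hat / np.sqrt(n)                  # plug-in standard error
print(f"theta_hat = {theta_hat:.3f}, 95% CI = "
      f"({theta_hat - 1.96 * se:.3f}, {theta_hat + 1.96 * se:.3f})")
```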
An Asymptotic Testing Procedure
Based on that asymptotic normality, it is possible to derive a simple testing procedure. Consider a test of the hypothesis H_0:\theta=0 against H_1:\theta\neq 0, usually called a “significance” test for parameter \theta (or for the significance of an explanatory variable in the context of a regression model). Under the assumption that H_0 is valid, then
\sqrt{n}\, \widehat{\theta}\ \xrightarrow{\mathcal{L}}\ \mathcal{N}(0,s^2)
for some variance s^2, which can be computed using the delta method. The p-value associated with that test is
p=\mathbb{P}\left(|Z|>\sqrt{n}\,\left\vert\frac{\widehat{\theta}_{\text{obs}}}{s}\right\vert\right)
where \widehat{\theta}_{\text{obs}} is the observed empirical estimator of the parameter and Z is a standard normal variable. Thus, the p-value can easily be computed using quantiles of the standard normal distribution. Here, the p-value is above 5% if
-1.96 < \sqrt{n}\,\frac{\widehat{\theta}_{\text{obs}}}{s} < 1.96
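As a small illustration, the following sketch computes that p-value for hypothetical values of \widehat{\theta}_{\text{obs}}, s and n (all of them assumptions of the example).

```python
# A minimal sketch of the asymptotic significance test of H0: theta = 0;
# the observed estimate, its asymptotic standard deviation s and the
# sample size are hypothetical values.
import numpy as np
from scipy import stats

theta_hat_obs, s, n = 0.15, 2.0, 500

z_obs = np.sqrt(n) * theta_hat_obs / s      # approximately N(0, 1) under H0
p_value = 2 * stats.norm.sf(abs(z_obs))     # P(|Z| > |z_obs|)

print(f"z = {z_obs:.3f}, p-value = {p_value:.4f}")
# H0 is rejected at the 5% level when the statistic lies outside (-1.96, 1.96)
```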
Weaker Forms of the Central Limit Theorem
As stated by Laplace, the central limit theorem relies on strong assumptions. Fortunately, most of them can be relaxed. In a first variant of the theorem, the random variables have to be independent, but not necessarily identically distributed. If the random variables X_i have means \mu_i and variances \sigma_i^2, then \mu and \sigma^2 in the central limit theorem should be replaced by averages of the \mu_i's and \sigma_i^2's, with an additional technical assumption related to the existence of some higher moments (the so-called Lyapunov condition).
For a second variant of the theorem, the random variables can be dependent, as in an ergodic Markov chain, or in an autoregressive time series. In that context, if X_1,X_2,\cdots,X_n,\cdots is a stationary time series, with mean \mu, then define
\sigma^2=\lim_{n\rightarrow\infty} \frac{\mathbb E[S_n^2]}{n}
where S_n=(X_1-\mu)+\cdots+(X_n-\mu). With that limit, the central limit theorem holds:
\mathbb P\left( \sqrt n \frac{[\bar X_n - \mu]}{ \sigma }\leq x\right) \rightarrow \Phi(x)
even if the variance term has a different interpretation here.
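A small simulation sketch of this dependent case is given below, for a stationary AR(1) series (the autoregressive coefficient and the sample sizes are arbitrary choices); for such a series the long-run variance is \sigma_\varepsilon^2/(1-\phi)^2.

```python
# A simulation sketch of the central limit theorem for a stationary
# AR(1) series x_t = phi * x_{t-1} + eps_t (phi and the sizes are
# arbitrary); its long-run variance is sigma_eps^2 / (1 - phi)^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
phi, sigma_eps = 0.5, 1.0
n, n_sim = 500, 2_000
lr_sigma = sigma_eps / (1 - phi)            # long-run standard deviation

z = np.empty(n_sim)
for i in range(n_sim):
    eps = rng.normal(0.0, sigma_eps, n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma_eps / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    z[i] = np.sqrt(n) * x.mean() / lr_sigma  # here the mean mu is 0

# The standardized means should be close to N(0, 1) despite the dependence
print(stats.kstest(z, "norm"))
```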
Finally, a third variant that can be mentioned is the one obtained by Paul Lévy about the asymptotic properties of the empirical average when the variance is not finite (actually, even when the first moment is not finite). In that case, the limiting distribution is no longer Gaussian.
References
Laplace, P.S. de (1810). Mémoire sur les approximations des formules qui sont fonctions de très grands nombres et sur leur application aux probabilités. Mémoires de l’Académie Royale des Sciences de Paris, 10.
Le Cam, L. (1986). The Central Limit Theorem around 1935. Statistical Science 1(1): 78–96.
Pólya, G. (1920). Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem. Mathematische Zeitschrift 8, 171–181.