Probabilistic Foundations of Econometrics, part 4

This post is the fourth in our series on the history and foundations of econometric and machine learning models. Part 3 is online here.

Goodness of Fit, and Model Choice

In the Gaussian linear model, the coefficient of determination – denoted R^2 – is often used as a measure of fit quality. It is based on the variance decomposition formula

\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\bar{y})^2}_{\text{total variance}}=\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{\text{residual variance}}+\underbrace{\frac{1}{n}\sum_{i=1}^n (\widehat{y}_i-\bar{y})^2}_{\text{explained variance}}

The R^2 is defined as the ratio of explained variance to total variance, another interpretation of the coefficient that we had introduced from the geometry of least squares,

R^2= \frac{\sum_{i=1}^n (y_i-\bar{y})^2-\sum_{i=1}^n (y_i-\widehat{y}_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2}

The sums of squared errors in this expression can be rewritten in terms of a log-likelihood. Recall that, up to an additive constant (obtained with a saturated model), the deviance in generalized linear models is defined by \text{Deviance}(\widehat{\beta}) = -2\log[\mathcal{L}], which can also be written \text{Deviance}(\widehat{\mathbf{y}}). The null deviance is defined as the one obtained without using the explanatory variables \mathbf{x}, so that \widehat{y}_i=\overline{y}. It is then possible to define, in a more general context (with a non-Gaussian distribution for y),

R^2=\frac{\text{Deviance}(\overline{y})-\text{Deviance}(\widehat{\mathbf{y}})}{\text{Deviance}(\overline{y})}=1-\frac{\text{Deviance}(\widehat{\mathbf{y}})}{\text{Deviance}(\overline{y})}

However, this measure cannot be used to choose a model, if one wishes to end up with a relatively simple model, because it increases artificially when explanatory variables without significant effect are added. We will then tend to prefer the adjusted R^2,

\bar R^2 = 1-(1-R^{2})\cdot\frac{n-1}{n-p} = R^{2}-\underbrace{(1-R^{2})\cdot\frac{p-1}{n-p}}_{\text{penalty}}

where p is the number of parameters of the model. This measure of fit quality thus penalizes overly complex models.
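To make this concrete, here is a minimal sketch (not from the original post) computing the deviance-based R^2 and the adjusted R^2 on simulated Poisson data; the data, the variable names and the use of statsmodels are illustrative assumptions.

```python
# Illustrative sketch: deviance-based R^2 and adjusted R^2 for a GLM, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 3))
y = rng.poisson(np.exp(0.5 + X @ np.array([0.3, -0.2, 0.0])))

X1 = sm.add_constant(X)
fit  = sm.GLM(y, X1, family=sm.families.Poisson()).fit()               # full model
fit0 = sm.GLM(y, np.ones((n, 1)), family=sm.families.Poisson()).fit()  # constant only (null model)

# R^2 = 1 - Deviance(y_hat) / Deviance(y_bar), i.e. the share of null deviance explained
r2 = 1 - fit.deviance / fit0.deviance

# adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p), with p the number of parameters
p = X1.shape[1]
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)
print(r2, r2_adj)
```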

This idea is found again in the Akaike criterion, where AIC=\text{Deviance}+2\cdot p, or in the Schwarz criterion, BIC=\text{Deviance}+\log(n)\cdot p. In large dimension (typically p>\sqrt{n}), we will tend to use a corrected AIC, defined by AIC_c=\text{Deviance}+2\cdot p\cdot n/(n-p-1).
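As a small illustration (again a sketch, with simulated data and names of my own choosing), the three criteria can be written exactly as above, Deviance plus a penalty, taking the deviance as -2\log\mathcal{L}; the additive constant of the saturated model is the same for all models fitted on the same data, so it does not affect comparisons.

```python
# Illustrative sketch: AIC, BIC and corrected AIC written as Deviance + penalty.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
X = sm.add_constant(rng.normal(size=(n, 4)))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.3]) + rng.normal(size=n)

fit = sm.OLS(y, X).fit()
p   = X.shape[1]                        # number of parameters of the model
dev = -2 * fit.llf                      # Deviance(beta_hat) = -2 log L (up to a constant)
aic  = dev + 2 * p
bic  = dev + np.log(n) * p
aicc = dev + 2 * p * n / (n - p - 1)    # corrected AIC, useful when p is large
print(aic, bic, aicc)
```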

These criteria are used in the so-called “stepwise” methods, which belong to the subset-selection approaches. In the “forward” method, we start by regressing on the constant only, then we add one variable at a time, retaining the one that lowers the AIC criterion the most, until adding a variable increases the AIC of the model. In the “backward” method, we start by regressing on all the variables, then we remove one variable at a time, removing the one that lowers the AIC criterion the most, until removing a variable increases the AIC of the model.
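The forward procedure can be sketched in a few lines; the code below is only an illustration on simulated data (the variable names, and the use of the AIC reported by statsmodels, are assumptions of this sketch, not part of the original post).

```python
# Illustrative sketch of forward stepwise selection driven by the AIC.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame(rng.normal(size=(n, 6)), columns=[f"x{j}" for j in range(6)])
y = 2 + 1.5 * X["x0"] - X["x2"] + rng.normal(size=n)

def aic_of(cols):
    Z = sm.add_constant(X[cols]) if cols else np.ones((n, 1))
    return sm.OLS(y, Z).fit().aic

selected, remaining = [], list(X.columns)
current = aic_of(selected)                       # start from the constant-only model
while remaining:
    best_aic, best_var = min((aic_of(selected + [v]), v) for v in remaining)
    if best_aic >= current:                      # no variable lowers the AIC: stop
        break
    selected.append(best_var)
    remaining.remove(best_var)
    current = best_aic

print(selected)                                  # typically ['x0', 'x2']
```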

Another justification for this notion of penalty (we will come back to this idea in machine learning) can be the following. Let us consider an estimator in the class of linear predictors, \mathcal{M}=\big\lbrace m:~m(\mathbf{x})=\mathbf{s}(\mathbf{x})^T\mathbf{y} \text{ where }\mathbf{S}=(\mathbf{s}(\mathbf{x}_1),\cdots,\mathbf{s}(\mathbf{x}_n))^T\text{ is some smoothing matrix}\big\rbrace, and assume that y=m_0(\mathbf{x})+\varepsilon, with \mathbb{E}[\varepsilon]=0 and \text{Var}[\varepsilon]=\sigma^2\mathbb{I}, so that m_0(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. From a theoretical point of view, the quadratic risk associated with an estimated model \widehat{m}, \mathbb{E}\big[(Y-\widehat{m}(\mathbf{X}))^2\big], is written

\mathcal{R}(\widehat{m})=\underbrace{\mathbb{E}\big[(Y-m_0(\mathbf{X}))^2\big]}_{\text{error}}+\underbrace{\mathbb{E}\big[(m_0(\mathbf{X})-\mathbb{E}[\widehat{m}(\mathbf{X})])^2\big]}_{\text{bias}^2}+\underbrace{\mathbb{E}\big[(\mathbb{E}[\widehat{m}(\mathbf{X})]-\widehat{m}(\mathbf{X}))^2\big]}_{\text{variance}}

if m_0 is the true model. The first term is sometimes called the “Bayes error”, and does not depend on the selected estimator \widehat{m}.
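The decomposition can be illustrated by simulation: at a fixed point x_0, the bias^2 and variance of a linear smoother are estimated by redrawing the noise many times on the same design. The choice of a k-nearest-neighbour smoother (which is indeed linear in \mathbf{y}), and all the numbers below, are illustrative assumptions.

```python
# Illustrative Monte Carlo sketch of the error / bias^2 / variance decomposition.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(123)
n, sigma, n_sim = 200, 1.0, 500
x = np.sort(rng.uniform(0, 1, n)).reshape(-1, 1)
m0 = lambda u: np.sin(2 * np.pi * u).ravel()       # the "true" model m_0
x0 = np.array([[0.3]])                             # point at which the risk is evaluated

preds = np.empty(n_sim)
for s in range(n_sim):
    y = m0(x) + sigma * rng.normal(size=n)         # new noise, same design
    preds[s] = KNeighborsRegressor(n_neighbors=20).fit(x, y).predict(x0)[0]

bias2    = (preds.mean() - m0(x0)[0]) ** 2
variance = preds.var()
print(sigma**2, bias2, variance)                   # error, bias^2, variance at x0
```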

The empirical quadratic risk associated with a model m is here \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n (y_i-m(\mathbf{x}_i))^2 (by convention). We recognize here the mean squared error, “mse”, which more generally gives the “risk” of the model m when another loss function is used (as we will discuss later on). It should be noted that

\mathbb{E}[\widehat{\mathcal{R}}_n(m)]=\frac{1}{n}\|m_0(\mathbf{x})-m(\mathbf{x})\|^2+\frac{1}{n}\mathbb{E}\big(\|\mathbf{Y}-m_0(\mathbf{x})\|^2\big)

We can show that

n\,\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]=\mathbb{E}\big(\|\mathbf{Y}-\widehat{m}(\mathbf{x})\|^2\big)=\|(\mathbb{I}-\mathbf{S})m_0\|^2+\sigma^2\|\mathbb{I}-\mathbf{S}\|^2

so that the (real) risk of \widehat{m} is

\mathcal{R}_n(\widehat{m})=\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]+2\frac{\sigma^2}{n}\text{trace}(\mathbf{S})

So, if \text{trace}(\mathbf{S})\geq 0 (which is not too strong an assumption), the empirical risk underestimates the true risk of the estimator. Actually, we recognize here the number of degrees of freedom of the model, the right-hand term corresponding to Mallows’ C_p, introduced in Mallows (1973) using not the deviance but R^2.
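A quick simulation (a sketch, with simulated data) makes this optimism correction visible for the simplest linear smoother, ordinary least squares, whose smoothing matrix is the hat matrix with \text{trace}(\mathbf{S})=p: the gap between the out-of-sample risk and the expected empirical risk is close to 2\sigma^2\,\text{trace}(\mathbf{S})/n.

```python
# Illustrative sketch: the empirical risk underestimates the true risk by
# about 2 * sigma^2 * trace(S) / n, here with S the OLS hat matrix.
import numpy as np

rng = np.random.default_rng(7)
n, p, sigma, n_sim = 100, 5, 1.0, 2000
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
m0 = X @ beta                                   # true regression function at the design points
S = X @ np.linalg.solve(X.T @ X, X.T)           # smoothing (hat) matrix, trace(S) = p

emp, out = np.empty(n_sim), np.empty(n_sim)
for s in range(n_sim):
    y = m0 + sigma * rng.normal(size=n)
    y_hat = S @ y
    emp[s] = np.mean((y - y_hat) ** 2)          # empirical risk (training error)
    y_new = m0 + sigma * rng.normal(size=n)     # fresh responses on the same design
    out[s] = np.mean((y_new - y_hat) ** 2)      # (approximate) true risk

print(out.mean() - emp.mean(), 2 * sigma**2 * np.trace(S) / n)  # should be close
```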

Statistical Tests

The most traditional test in econometrics is probably the significance test, corresponding to the nullity of a coefficient in a linear regression model. Formally, it is the test of H_0:\beta_k=0 against H_1:\beta_k\neq 0. The so-called Student t-test, based on the statistic t_k=\widehat{\beta}_k/\text{se}_{\widehat{\beta}_k}, allows us to decide between the two alternatives, using the p-value of the test, defined by \mathbb{P}[|T|>|t_k|] with T\overset{\mathcal{L}}{\sim} Std_\nu, where \nu is the number of degrees of freedom of the model (\nu=n-p for the standard linear model). In large dimension, however, this statistic is of very limited interest, given a significant FDR (“False Discovery Rate”). Classically, with a significance level \alpha=0.05, about 5% of the truly non-significant variables will falsely appear significant. Suppose that we have p=100 explanatory variables, but that only 5 are really significant. We can hope that these 5 variables will pass the Student test, but we can also expect roughly 5 additional variables (false positives) to emerge. We would then have 10 variables perceived as significant, while only half of them really are, i.e. an FDR of 50%. In order to avoid this recurrent pitfall in multiple testing, it is natural to use the procedure of Benjamini & Hochberg (1995).
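The pitfall, and the Benjamini-Hochberg correction, can be reproduced in a few lines on simulated data; the scenario below (p = 100 variables, 5 of them truly relevant) mirrors the example above, and the correction is written out directly rather than taken from a library.

```python
# Illustrative sketch: naive 5% thresholding vs. the Benjamini-Hochberg procedure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2019)
n, p, k, alpha = 1000, 100, 5, 0.05
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 0.5                                    # only the first 5 variables matter
y = X @ beta + rng.normal(size=n)

pvals = sm.OLS(y, sm.add_constant(X)).fit().pvalues[1:]   # drop the intercept

naive = np.where(pvals < alpha)[0]                # naive thresholding at alpha

# Benjamini-Hochberg: reject the i smallest p-values, with i the largest index
# such that p_(i) <= (i / p) * alpha
order = np.argsort(pvals)
below = pvals[order] <= alpha * np.arange(1, p + 1) / p
n_rej = below.nonzero()[0].max() + 1 if below.any() else 0
bh    = order[:n_rej]

print(len(naive), len(bh))                        # BH keeps far fewer false discoveries
```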

From correlation to causal effect

Econometric models are used to implement public policy evaluations. It is therefore essential to fully understand the underlying mechanisms in order to know which variables actually make it possible to act on a variable of interest. But then we move on to another important dimension of econometrics. Jerry Neyman was responsible for the first work on the identification of causal mechanisms, and Rubin (1974) then formalized the approach, called the “Rubin causal model” in Holland (1986). The first approaches to the notion of causality in econometrics were based on the use of instrumental variables, regression discontinuity designs, difference-in-differences analysis, and natural (or not) experiments. Causality is usually inferred by comparing the effect of a policy – or, more generally, of a treatment – with its counterfactual, ideally given by a random control group.

The causal effect of the treatment is then defined as \Delta=y_1-y_0, i.e. the difference between what the situation would be with treatment (noted t=1) and without treatment (noted t=0). The concern is that only y=t\cdot y_1+(1-t)\cdot y_0 and t are observed. In other words, the causal effect of the variable t on y is not observed (since only one of the two potential outcomes – y_0 or y_1 – is observed for each individual), and it is also individual, and therefore a function of the covariates x. Generally, by making assumptions about the distribution of the triplet (Y_0,Y_1,T), some parameters of the distribution of the causal effect become identifiable from the density of the observable variables (Y,T). Classically, we will be interested in the moments of this distribution, in particular the average treatment effect in the population, \mathbb{E}[\Delta], or just the average treatment effect on the treated, \mathbb{E}[\Delta|T=1]. If the outcome (Y_0,Y_1) is independent of the treatment assignment variable T, it can be shown that \mathbb{E}[\Delta]=\mathbb{E}[Y|T=1]-\mathbb{E}[Y|T=0]. But if this independence hypothesis is not verified, there is a selection bias, often associated with \mathbb{E}[Y_0|T=1]-\mathbb{E}[Y_0|T=0]. Rosenbaum & Rubin (1983) propose to use a propensity score – the probability of being treated, p(x)=\mathbb{P}[T=1|X=x] – noting that if the variable Y_0 is independent of access to treatment T conditionally on the explanatory variables X, then it is independent of T conditionally on the score p(X): it is then sufficient to match observations using their propensity score. Heckman et al. (2003) thus propose a kernel estimator based on the propensity score, which directly provides an estimator of the treatment effect on the treated.
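As an illustration of the matching idea, here is a minimal sketch of propensity-score matching on simulated data: the propensity score is estimated by a logistic regression, each treated unit is matched to the closest control on that score, and the matched differences are averaged. The simulated design, the “true” effect of 2, and the use of scikit-learn are assumptions of this sketch, not part of the original post.

```python
# Illustrative sketch of propensity-score matching (effect of the treatment on the treated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))   # selection on X
y0 = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
y1 = y0 + 2.0                                     # true treatment effect = 2
y = np.where(t == 1, y1, y0)                      # only y and t are observed

score = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]       # p(x) = P[T=1|X=x]

treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(score[control].reshape(-1, 1))
_, idx = nn.kneighbors(score[treated].reshape(-1, 1))
att = np.mean(y[treated] - y[control[idx.ravel()]])                 # matched difference
print(att)                                        # should be close to 2
```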

To be continued... Next time, we will introduce machine learning techniques (the references mentioned above are online here).

