In our series of posts on the history and foundations of econometric and machine learning models, a lot of references were given. Here they are.
Continue reading References on Econometrics and Machine Learning
This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were devoted to econometric techniques. Part 4 is online here.
In parallel with these tools developed by, and for, economists, a whole literature has developed on similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics has developed around the principle of inference (i.e. explaining the relationship linking y to the variables \mathbf{x}), while another culture is primarily interested in prediction. In a discussion that follows the article, David Cox states very clearly that in statistics (and econometrics) “predictive success (…) is not the primary basis for model choice“. We will come back here to the roots of machine learning techniques. The important point, as we will see, is that the main concern of machine learning is related to the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on observations outside the sample used to fit the model.
Today, we speak of “machine learning” to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, it should be noted that, historically, other names have been used. For example, Friedman (1997) proposes to make the link between statistics (which closely resembles econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) and what was then called “data mining” (which then included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the “statistical learning” techniques described in Hastie et al. (2009). But one should keep in mind that machine learning is a very large field of research.
The so-called “natural” learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also simultaneously learns the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization and discovery to more or less supervised or autonomous learning. The idea in artificial intelligence is to draw inspiration from the functioning of the brain in order to allow “artificial” or “automatic” learning, by a machine.

A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve in order to win. One historical approach has been to teach the machine the rules of the game: while this allows it to play, it does not help it to play well. Assuming that the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm, based on an evaluation function: the machine searches forward in the tree of possible moves, as far as the computational resources allow (about ten moves ahead in chess, for example); it then computes various criteria (which have been indicated to it beforehand) for all positions (number of pieces taken or lost, occupancy of the centre, etc., in our chess example), and finally plays the move that maximizes its gain (a sketch of such a depth-limited search is given below).

Another example is the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (on checks, in ZIP codes on envelopes, etc.). The problem is to predict the value of a variable y, knowing that a priori y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training sets, in other words here millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is then to use a decision criterion based on the nearest neighbours whose labels are known (using a predefined metric).
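To make the game-playing idea concrete, here is a minimal sketch of such a depth-limited min-max search (an illustration, not code from the original series; the legal_moves, play and evaluate callables are hypothetical placeholders for the rules of the game and the evaluation criteria given to the machine):

```python
def minimax(state, depth, maximizing, legal_moves, play, evaluate):
    """Depth-limited min-max search: explore the tree of possible moves as far
    as the computational resources (depth) allow, score the leaves with the
    evaluation function, and back the values up the tree."""
    moves = legal_moves(state)
    if depth == 0 or not moves:          # horizon reached, or terminal position
        return evaluate(state), None
    best_value, best_move = (float("-inf"), None) if maximizing else (float("inf"), None)
    for move in moves:
        value, _ = minimax(play(state, move), depth - 1, not maximizing,
                           legal_moves, play, evaluate)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move         # the machine plays the move maximizing its gain
```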
The nearest-neighbour method (“k-nearest neighbours”) can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, let us reorder the observations according to their distance to \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}); we can then consider as a prediction for y the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
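As a minimal sketch (assuming the Euclidean distance for \Delta, and simulated data purely for illustration), the prediction \widehat{m}_k(\mathbf{x}) can be computed as follows:

```python
import numpy as np

def knn_predict(x_new, X, y, k=5):
    """Average the responses of the k observations closest to x_new."""
    dist = np.linalg.norm(X - x_new, axis=1)   # Delta(x_i, x), Euclidean here
    nearest = np.argsort(dist)[:k]             # indices of the k closest x_i
    return y[nearest].mean()                   # average of the corresponding y_i

# toy illustration on simulated data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)
print(knn_predict(np.array([0.5, -0.2]), X, y, k=10))
```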
Machine learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine will then explore the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E with respect to a task T and a performance measure P, if its performance on T, measured by P, improves with experience E. The task T can be a default score, for example, and the performance P the percentage of errors made. The system learns if the proportion of correctly predicted defaults increases with experience.
As we can see, machine learning is basically a problem of optimizing a criterion based on data (hereafter called training data). Many textbooks on machine learning techniques propose algorithms without ever mentioning any probabilistic model. In Watt et al. (2016), for example, the word “probability” is mentioned only once, with this footnote that will surprise and make any econometrician smile: “the logistic regression can also be interpreted from a probabilistic perspective” (page 86). But many recent books offer a review of machine learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the paradigm of “probably approximately correct” (PAC) learning, a probabilistic flavour has been added to a previously very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).
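To give the flavour of such a result, here is a standard bound for a finite class of classifiers in the so-called realizable case (given as an illustration, not taken from the references above): if the algorithm returns a classifier \widehat{h}\in\mathcal{H} that makes no error on the n training observations, then with probability at least 1-\delta over the draw of the training sample, its generalization error satisfies \text{error}(\widehat{h})\leq\frac{1}{n}\left(\log|\mathcal{H}|+\log\frac{1}{\delta}\right). The guarantee is only “probable” (it holds at level 1-\delta) and only “approximately correct” (the error is bounded, not zero), hence the name.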
To be continued (references are online here)…
This post is the fourth one of our series on the history and foundations of econometric and machine learning models. Part 3 is online here.
In the Gaussian linear model, the coefficient of determination – noted R^2 – is often used as a measure of fit quality. It is based on the variance decomposition formula \underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\bar{y})^2}_{\text{total variance}}=\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{\text{residual variance}}+\underbrace{\frac{1}{n}\sum_{i=1}^n (\widehat{y}_i-\bar{y})^2}_{\text{explained variance}}. The R^2 is defined as the ratio of explained variance to total variance, which is another interpretation of the coefficient introduced earlier from the geometry of least squares, R^2= \frac{\sum_{i=1}^n (y_i-\bar{y})^2-\sum_{i=1}^n (y_i-\widehat{y}_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2}. The sums of squared errors in this expression can be rewritten in terms of a log-likelihood. It should indeed be remembered that, in generalized linear models, the deviance is defined – up to an additive constant (obtained with a saturated model) – by {Deviance}(\widehat{\beta}) = -2\log[\mathcal{L}], which can also be noted Deviance(\widehat{\mathbf{y}}). A null deviance can be defined as the one obtained without using the explanatory variables \mathbf{x}, so that \widehat{y}_i=\overline{y}. It is then possible to define, in a more general context (with a non-Gaussian distribution for y), R^2=\frac{{Deviance}(\overline{y})-{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}=1-\frac{{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}. However, this measure cannot be used to choose a model, if one wishes to end up with a relatively simple model, because it increases artificially when explanatory variables without significant effect are added. We will then tend to prefer the adjusted R^2, \bar R^2 = {1-(1-R^{2})\cdot{n-1 \over n-p}} = R^{2}-\underbrace{(1-R^{2})\cdot{p-1 \over n-p}}_{\text{penalty}}, where p is the number of parameters of the model. This measure of fit quality then penalizes overly complex models.
This idea is found in the Akaike criterion, where AIC=Deviance+2\cdot p, or in the Schwarz criterion, BIC=Deviance+\log(n)\cdot p. In large dimension (typically p>\sqrt{n}), we will tend to use a corrected AIC, defined by AIC_c=Deviance+2\cdot p\cdot n/(n-p-1).
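As a minimal sketch (a simulated Poisson regression and the statsmodels package, purely to illustrate the definitions above), the deviance-based R^2 and these criteria can be computed as follows:

```python
import numpy as np
import statsmodels.api as sm

# simulated Poisson counts, to move away from the Gaussian case
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = rng.poisson(np.exp(0.5 + X @ np.array([0.3, -0.2, 0.0])))

fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()

# deviance-based R^2 : 1 - Deviance(y_hat) / Deviance(y_bar)
r2_dev = 1 - fit.deviance / fit.null_deviance

# criteria written with the deviance, as above (equal to the likelihood-based
# versions up to the additive constant of the saturated model)
p_params = int(fit.df_model) + 1
aic  = fit.deviance + 2 * p_params
bic  = fit.deviance + np.log(n) * p_params
aicc = fit.deviance + 2 * p_params * n / (n - p_params - 1)
print(r2_dev, aic, bic, aicc)
```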
These criteria are used in the so-called “stepwise” methods, which introduce subset-selection approaches. In the “forward” method, we start by regressing on the constant only, then add one variable at a time, retaining the one that lowers the AIC the most, until adding any further variable increases the AIC of the model. In the “backward” method, we start by regressing on all the variables, then remove one variable at a time, removing the one whose deletion lowers the AIC the most, until removing any variable increases the AIC of the model.
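A minimal sketch of the “forward” procedure (on simulated Gaussian data, using the AIC reported by statsmodels; an illustration only):

```python
import numpy as np
import statsmodels.api as sm

def forward_aic(y, X):
    """Start from the constant-only model and, at each step, add the variable
    that lowers the AIC the most, stopping when no addition lowers it."""
    selected, remaining = [], list(range(X.shape[1]))

    def aic(cols):
        Z = sm.add_constant(X[:, cols]) if cols else np.ones((len(y), 1))
        return sm.OLS(y, Z).fit().aic

    current = aic(selected)
    while remaining:
        best_aic, best_j = min((aic(selected + [j]), j) for j in remaining)
        if best_aic >= current:          # adding any variable increases the AIC
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current = best_aic
    return selected

# toy example: only the first two columns actually matter
rng = np.random.default_rng(2)
X = rng.normal(size=(150, 6))
y = 1 + 2 * X[:, 0] - X[:, 1] + rng.normal(size=150)
print(forward_aic(y, X))   # typically returns [0, 1]
```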
Another justification for this notion of penalty (we will come back to this idea in machine learning) can be the following. Let us consider an estimator in the class of linear predictors, \mathcal{M}=\big\lbrace m:~m(\mathbf{x})=\boldsymbol{s}(\mathbf{x})^T\mathbf{y} \text{ where }\boldsymbol{S}=(\boldsymbol{s}(\mathbf{x}_1),\cdots,\boldsymbol{s}(\mathbf{x}_n))^T\text{ is some smoothing matrix}\big\rbrace, and assume that y=m_0(\mathbf{x})+\varepsilon, with \mathbb{E}[\varepsilon]=0 and Var[\varepsilon]=\sigma^2\mathbb{I}, so that m_0(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. From a theoretical point of view, the quadratic risk associated with an estimated model \widehat{m}, \mathbb{E}\big[(Y-\widehat{m}(\mathbf{X}))^2\big], is written \mathcal{R}(\widehat{m})=\underbrace{\mathbb{E}\big[(Y-m_0(\mathbf{X}))^2\big]}_{\text{error}}+\underbrace{\mathbb{E}\big[(m_0(\mathbf{X})-\mathbb{E}[\widehat{m}(\mathbf{X})])^2\big]}_{\text{bias}^2}+\underbrace{\mathbb{E}\big[(\mathbb{E}[\widehat{m}(\mathbf{X})]-\widehat{m}(\mathbf{X}))^2\big]}_{\text{variance}} if m_0 is the true model. The first term is sometimes called the “Bayes error”, and does not depend on the selected estimator \widehat{m}.
The empirical quadratic risk associated with a model m is here \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n (y_i-m(\mathbf{x}_i))^2 (by convention). We recognize here the mean squared error, “mse”, which will more generally give the “risk” of the model m when another loss function is used (as we will discuss later on). It should be noted that \mathbb{E}[\widehat{\mathcal{R}}_n(m)]=\frac{1}{n}\|m_0(\mathbf{x})-m(\mathbf{x})\|^2+\frac{1}{n}\mathbb{E}\big(\|{Y}-m_0(\mathbf{X})\|^2\big). We can show that n\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]=\mathbb{E}\big(\|Y-\widehat{m}(\mathbf{x})\|^2\big)=\|(\mathbb{I}-\mathbf{S})m_0\|^2+\sigma^2\|\mathbb{I}-\mathbf{S}\|^2, so that the (real) risk of \widehat{m} is {\mathcal{R}}_n(\widehat{m})=\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]+2\frac{\sigma^2}{n}\text{trace}(\boldsymbol{S}). So, if \text{trace}(\boldsymbol{S})\geq0 (which is not too strong an assumption), the empirical risk underestimates the true risk of the estimator. We actually recognize here the number of degrees of freedom of the model, the right-hand term corresponding to Mallows’ C_p, introduced in Mallows (1973) using not the deviance but R^2.
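A small simulation illustrates this correction (a sketch, assuming an ordinary least squares smoother, for which \boldsymbol{S}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T and \text{trace}(\boldsymbol{S})=p; the design and the number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma = 100, 5, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -1.0, 0.5, 0.0])
m0 = X @ beta                                    # true regression function m_0(x)

S = X @ np.linalg.inv(X.T @ X) @ X.T             # smoothing ("hat") matrix, trace(S) = p

emp, true = [], []
for _ in range(2000):
    y = m0 + sigma * rng.normal(size=n)          # training sample
    y_hat = S @ y                                # linear predictor S y
    y_new = m0 + sigma * rng.normal(size=n)      # new responses on the same design
    emp.append(np.mean((y - y_hat) ** 2))        # empirical (in-sample) risk
    true.append(np.mean((y_new - y_hat) ** 2))   # risk on new data

print(np.mean(true) - np.mean(emp))              # close to 2 * sigma^2 * trace(S) / n
print(2 * sigma**2 * np.trace(S) / n)            # = 0.1 here
```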
The most traditional test in econometrics is probably the significance test, corresponding to the nullity of a coefficient in a linear regression model. Formally, it is the test of H_0:\beta_k=0 against H_1:\beta_k\neq 0. The so-called Student test, based on the statistic t_k=\widehat{\beta}_k/se_{\widehat{\beta}_k}, allows one to decide between the two alternatives, using the p-value of the test, defined by \mathbb{P}[|T|>|t_k|] with T\overset{\mathcal{L}}{\sim} Std_\nu, where \nu is the number of degrees of freedom (\nu=n-p for the standard linear model with p parameters). In large dimension, however, this statistic is of very limited interest, given a significant FDR (“False Discovery Rate”). Classically, with a significance level \alpha=0.05, 5% of the truly non-significant variables will, on average, wrongly appear significant. Suppose that we have p=100 explanatory variables, but that only 5 are really significant. We can hope that these 5 variables will pass the Student test, but we can also expect about 5 additional variables to show up as false positives. We will then have 10 variables perceived as significant, while only half of them really are, i.e. an FDR of 50%. In order to avoid this recurrent pitfall in multiple testing, it is natural to use the procedure of Benjamini & Hochberg (1995).
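A minimal sketch of this situation (simulated data with p=100 covariates of which only 5 matter, marginal Student tests for simplicity, and the Benjamini & Hochberg correction as implemented in statsmodels; an illustration only):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# 100 candidate covariates, only the first 5 have a real effect
rng = np.random.default_rng(4)
n, p, p_true = 500, 100, 5
X = rng.normal(size=(n, p))
beta = np.r_[np.full(p_true, 0.3), np.zeros(p - p_true)]
y = X @ beta + rng.normal(size=n)

# marginal Student tests of H0 : beta_k = 0, one per covariate
pvals = np.array([stats.linregress(X[:, j], y).pvalue for j in range(p)])

naive = pvals < 0.05                                   # raw 5% threshold
bh = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]

print("raw threshold     :", naive.sum(), "rejections,", naive[p_true:].sum(), "false positives")
print("Benjamini-Hochberg:", bh.sum(), "rejections,", bh[p_true:].sum(), "false positives")
```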
Econometric models are used to implement public policy evaluations. It is therefore essential to fully understand the underlying mechanisms in order to know which variables actually make it possible to act on a variable of interest. But then we move on to another important dimension of econometrics. Jerry Neyman was responsible for the first work on the identification of causal mechanisms, and then Rubin (1974) formalized the approach, called the “Rubin causal model” in Holland (1986). The first approaches to the notion of causality in econometrics were based on the use of instrumental variables, regression discontinuity designs, difference-in-differences analysis, and experiments, natural or not. Causality is usually inferred by comparing the effect of a policy – or more generally of a treatment – with its counterfactual, ideally given by a random control group. The causal effect of the treatment is then defined as \Delta=y_1-y_0, i.e. the difference between what the situation would be with treatment (noted t=1) and without treatment (noted t=0). The concern is that only y=t\cdot y_1+(1-t)\cdot y_0 and t are observed. In other words, the causal effect of variable t on y is not observed (since only one of the two potential outcomes – y_0 or y_1 – is observed for each individual), but it is also individual, and therefore a function of the covariates \mathbf{x}. Generally, by making assumptions about the distribution of the triplet (Y_0,Y_1,T), some parameters of the distribution of the causal effect become identifiable from the density of the observable variables (Y,T). Classically, we will be interested in the moments of this distribution, in particular the average treatment effect in the population, \mathbb{E}[\Delta], or even just the average treatment effect on the treated, \mathbb{E}[\Delta|T=1]. If the outcome (Y_0,Y_1) is independent of the treatment assignment variable T, it can be shown that \mathbb{E}[\Delta]=\mathbb{E}[Y|T=1]- \mathbb{E}[Y|T=0]. But if this independence assumption is not satisfied, there is a selection bias, often associated with \mathbb{E}[Y_0|T=1]- \mathbb{E}[Y_0|T=0]. Rosenbaum & Rubin (1983) propose to use a propensity score (the propensity to be treated), p(x)=\mathbb{P}[T=1|X=x], noting that if the variable Y_0 is independent of access to treatment T conditionally on the explanatory variables X, then it is independent of T conditionally on the score p(X): it is then sufficient to match observations using their propensity score. Heckman et al (2003) thus propose a kernel estimator on the propensity score, which simply provides an estimator of the average effect of the treatment on the treated.
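A minimal sketch of the matching idea (simulated observational data, a logistic regression for the propensity score p(x), and one-to-one nearest-neighbour matching on the estimated score, with replacement; an illustration, not the kernel estimator of Heckman et al.):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# simulated observational data, with confounding through x
rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 2))
prob_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, prob_treat)                   # treatment assignment
y0 = X[:, 0] + rng.normal(size=n)                 # potential outcome without treatment
y1 = y0 + 1.0                                     # true treatment effect Delta = 1
y = np.where(T == 1, y1, y0)                      # only one outcome is observed

# step 1: estimate the propensity score p(x) = P[T = 1 | X = x]
p_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# step 2: match each treated unit to the control unit with the closest score
treated, control = np.where(T == 1)[0], np.where(T == 0)[0]
gaps = np.abs(p_hat[treated][:, None] - p_hat[control][None, :])
matches = control[gaps.argmin(axis=1)]

# average treatment effect on the treated (ATT)
print((y[treated] - y[matches]).mean())           # close to 1 in this simulation
```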
To be continued… next time, we’ll introduce “machine learning techniques” (references mentioned above are online here)