
Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, with the use of Fisher scoring (a gradient-descent method) to solve the first-order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In machine learning, optimization is the central tool, and effective optimization algorithms are needed to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace. In some cases, instead of global optimization, it is sufficient to optimize one coordinate at a time (a strategy studied at length in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\boldsymbol{e}_i)\geq f(\mathbf{x}) for any h>0 and i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where \mathbf{e}=(\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is not true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, it holds if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This is the case for the Lasso objective, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2 }+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic and numerical issues may seem secondary to econometricians, but they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm that yields a solution. These optimization techniques can be transposed: for example, coordinate descent can be used for SVMs (support vector machines) when the space is not linearly separable and the classification error must be penalized (we will come back to this technique in the next section).
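To make the coordinate-descent idea concrete, here is a minimal Python sketch (ours, not from the paper; all names are illustrative): it solves the Lasso problem for a squared-error loss by cycling over the coordinates and applying the soft-thresholding operator, which is the closed-form coordinate update in that case.

```python
import numpy as np

def soft_threshold(z, gamma):
    # proximal operator of gamma * |.| (the non-differentiable, separable part)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2)||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)                    # x_j^T x_j, precomputed
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ np.r_[2.0, -1.0, np.zeros(8)] + rng.normal(size=200)
print(lasso_cd(X, y, lam=50.0).round(2))   # the small coefficients are set to 0
```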

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. This problem is actually more general: comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} amounts to comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not tell us how the model will behave on new data. This is the so-called "generalization" problem. The traditional approach then consists in splitting the sample (of size n) into two parts: one part used to train the model (the training database, in-sample, of size m) and one part used to test the model (the testing database, out-of-sample, of size n-m). The latter makes it possible to measure a genuine predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are realizations of independent and centered random variables. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk equals \sigma^2 \text{trace} (\Pi_X)/m, i.e. \sigma^2 p/m. The empirical out-of-sample quadratic risk, on the other hand, is \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big), where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and, by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression then differs from the one obtained in-sample, and, using the bound of Groves & Rothenberg (1969), we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is quite intuitive once we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version, where \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{ IS}}=\sum_{i=1}^m [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2\text{ and }\widehat{\mathcal{R}}^{\text{ OS}}=\sum_{i=m+1}^{n} [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2, and, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu, where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
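As a quick illustration, a small Monte Carlo sketch (ours, with Gaussian covariates so that the Wishart formula applies) can be used to check the two theoretical values \sigma^2 p/m and \sigma^2 p/(m-p-1):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, sigma, n_sim = 100, 5, 1.0, 5000
beta0 = rng.normal(size=p)
is_risk = os_risk = 0.0
for _ in range(n_sim):
    X = rng.normal(size=(m, p))                       # Gaussian design
    y = X @ beta0 + sigma * rng.normal(size=m)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    # in-sample: averaged over the design points used for estimation
    is_risk += np.mean((X @ (beta_hat - beta0)) ** 2)
    # out-of-sample: a fresh observation x, independent of the others
    x = rng.normal(size=p)
    os_risk += (x @ (beta_hat - beta0)) ** 2
print(is_risk / n_sim, sigma**2 * p / m)              # ~ sigma^2 p / m
print(os_risk / n_sim, sigma**2 * p / (m - p - 1))    # ~ sigma^2 p / (m-p-1)
```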

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} as a function of the complexity of the model (degree of a polynomial regression, number of knots in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} decreases (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (even on in-sample data). And if the model is too complex, we are in a situation of "overfitting": the model starts to model the noise. Of course, this figure should remind us of the one we saw in our second post of this series.

Figure 4 : Generalization, under- and over-fitting

Instead of splitting the database in two, with part of the data used to calibrate the model and part to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the "jackknife", introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), widely used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). Cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model to the removed observation: \widehat{\mathcal{R}}^{\text{ CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). This is the "leave-one-out" cross-validation (LOOCV) method.
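A generic leave-one-out routine is only a few lines long; here is a hedged sketch (the names fit, predict and loss are ours), illustrated with ordinary least squares and a quadratic loss:

```python
import numpy as np

def loocv_risk(X, y, fit, predict, loss):
    """Leave-one-out estimate of the predictive risk (requires n fits)."""
    n = len(y)
    total = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        model = fit(X[keep], y[keep])        # m_(i): fitted without obs. i
        total += loss(y[i], predict(model, X[i]))
    return total / n

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]   # OLS
predict = lambda b, x: x @ b
loss = lambda y, yhat: (y - yhat) ** 2                    # quadratic loss
```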

This technique is reminiscent of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot y_t +(1-\alpha)\cdot{}_{t-1}\widehat{y}_t, where \alpha\in[0,1], and we consider as "optimal" \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace, as described in Hyndman et al (2009).
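For instance, \alpha^\star can be found by a simple grid search over the one-step-ahead forecast errors; a small sketch (ours), with a quadratic loss and a common initialization convention:

```python
import numpy as np

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts of simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]                                  # initialization convention
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def sse(alpha, y):
    f = ses_forecasts(y, alpha)
    return np.sum((y[1:] - f[1:]) ** 2)          # quadratic loss, t = 2, ..., T

y = np.cumsum(np.random.default_rng(2).normal(size=200))   # toy series
grid = np.linspace(0.01, 1.0, 100)
alpha_star = grid[np.argmin([sse(a, y) for a in grid])]
```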

The main problem with the leave-one-out method is that it requires fitting n models, which can be problematic in large dimensions. An alternative is k-fold cross-validation, which consists in using a partition of \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Denoting \widehat{m}_{(j)} the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{ CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where one observation is removed at a time (LOOCV), is the special case k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of models to fit is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar, and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
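The same sketch extends directly to k folds (again, the function names are ours):

```python
import numpy as np

def kfold_risk(X, y, fit, predict, loss, k=10, seed=0):
    """k-fold cross-validation estimate of the risk (requires k fits)."""
    idx = np.random.default_rng(seed).permutation(len(y))
    risks = []
    for block in np.array_split(idx, k):         # the blocks I_1, ..., I_k
        train = np.setdiff1d(idx, block)
        model = fit(X[train], y[train])          # m_(j), built on I_{-j}
        preds = np.array([predict(model, X[i]) for i in block])
        risks.append(np.mean(loss(y[block], preds)))
    return np.mean(risks)                        # with k = n, this is LOOCV
```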

Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, indicating which observations (y_i,\mathbf{x}_i) are kept in the learning sample (at each draw), and let \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{1}{n_{\overline{b}}}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations not kept in \mathcal{I}_b. Note that, with this technique, on average e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, which is comparable to the proportions used when splitting the data into a calibration sample and a test sample. In fact, as Stone (1977) showed, the minimization of AIC is comparable to the (leave-one-out) cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation with k=n/\log n.
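The e^{-1} proportion is easy to check numerically; a short simulation sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 1000, 500
oob = np.mean([len(np.setdiff1d(np.arange(n), rng.integers(0, n, n))) / n
               for _ in range(B)])
print(oob, np.exp(-1))   # both close to 0.368: ~36.8% of points are left out
```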

All the techniques mentioned here belong to the "machine learning" section since they rely on automatic, computational techniques, and no probabilistic foundations are necessary. In many cases we used the notation m^\star (at least in the first posts on "machine learning" techniques) to highlight the fact that we want some sort of "optimal" model, and to make a distinction with the estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, it is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 4

This post is the eighth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 7 is online here.

Penalization and variables selection

One important concept in econometrics is Ockham’s razor – also known as the law of parsimony (lex parsimoniae) – which can be related to abductive reasoning.

Akaike's criterion was based on a penalization of the likelihood taking into account the complexity of the model (the number of explanatory variables retained). While in econometrics it is customary to maximize the likelihood (to build an asymptotically unbiased estimator) and to judge the quality of the model ex post by penalizing the likelihood, the strategy here will be to penalize ex ante, in the objective function, even if this means building a biased estimator. Typically, we will solve: (\widehat{\beta}_{0,\lambda},\widehat{\beta}_{\lambda})=\text{argmin}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda \text{ penalization}( \boldsymbol{\beta})\right\rbrace, ~~~(11) where the penalty function is often a norm \|\cdot\| chosen a priori, together with a penalty parameter \lambda (we recover, in a way, the distinction between AIC and BIC if the penalty function is the complexity of the model, i.e. the number of explanatory variables retained). With the \ell_2 norm, we obtain the ridge estimator, and with the \ell_1 norm, the lasso estimator ("Least Absolute Shrinkage and Selection Operator"). The penalty used previously involved the number of degrees of freedom of the model, so it may seem surprising to use \|\beta\|_{\ell_2} as in ridge regression. However, a Bayesian view of this penalty is possible. It should be recalled that in a Bayesian model, \underbrace{\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]}_{\text{posterior}} \propto \underbrace{\mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{likelihood}} \cdot \underbrace{\mathbb{P}[\boldsymbol{\theta}]}_{\text{prior}}, or \log\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]= \underbrace{\log \mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{log-likelihood}} + \underbrace{\log\mathbb{P}[\boldsymbol{\theta}]}_{\text{penalty}}. In a Gaussian linear model, if we assume a centered Gaussian prior distribution for \theta, we find a penalty based on a quadratic form of the components of \theta.

Before going back in detail to these two estimators, obtained with the \ell_1 or \ell_2 norm, let us return for a moment to a very similar problem: the best choice of explanatory variables. Classically (and this will be even more true in large dimension), we can have a large number of explanatory variables, p, but many are just noise, in the sense that \beta_j=0 for a large number of j. Let s be the number of (truly) relevant covariates, s=\#S, with S=\{j=1,\cdots,p:\beta_j\neq 0\}. If we denote by \mathbf{X}_S the matrix composed of the relevant variables (in columns), then we assume that the real model is of the form y=\mathbf{x}_S^T \beta_S+\varepsilon. Intuitively, an interesting estimator would then be \widehat{\beta}_S=[\mathbf{X}_S^T \mathbf{X}_S ]^{-1} \mathbf{X}_S^T \mathbf{y}, but this estimator is only theoretical because the set S is unknown here. This estimator can actually be seen as the oracle estimator mentioned above. One may then be tempted to solve (\widehat{\beta}_{0,s},\widehat{\beta}_{s})=\underset{\beta_S\in\mathbb{R}^s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_S^T\beta_S)\right\rbrace,\text{ s.t. } \# {S}=s. This problem was introduced by Foster & George (1994) using the \ell_0 notation. More precisely, let us define the following three norms, for \mathbf{a}\in\mathbb{R}^d: \Vert\boldsymbol{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~ \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|~~\text{ and }~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}

Table 1: Constrained optimization and regularization.

Let us consider the optimization problems in Table 1. If we consider the classical problem where the quadratic norm is used for \ell, the two problems of equation (\ell1) of Table 1 are equivalent, in the sense that, for any solution (\beta^\star,s) of the left problem, there is a \lambda^\star such that (\beta^\star,\lambda^\star) is a solution of the right problem, and vice versa. The result also holds for problems (\ell2): these are indeed convex problems. On the other hand, the two problems (\ell0) are not equivalent: while for any solution (\beta^\star,\lambda^\star) of the right problem there is an s^\star such that \beta^\star is a solution of the left problem, the converse is not true. More generally, if we want to use an \ell_p norm, sparsity is obtained when p\leq 1, whereas we need p\geq1 for the optimization program to be convex.

One may be tempted to solve the penalized program (\ell0) directly, as suggested by Foster & George (1994). Numerically, it is a complex combinatorial problem in large dimension (Natarajan (1995) notes that it is an NP-hard problem), but it is possible to show that if \lambda\sim\sigma^2 \log(p), then \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \leq \underbrace{\mathbb{E}\big([\mathbf{x}_{ {S}}^T\widehat{\beta}_{{S}}-\mathbf{x}^T \beta_0]^2\big)}_{=\sigma^2 \#{S}}\cdot \big(4\log p+2+o(1)\big). Observe that in this case \widehat{\beta}_{\lambda,j}^{\text{sub}} = \left\lbrace\begin{array}{l}0 \text{ if } j\notin{S}_\lambda(\beta)\\ \widehat{\beta}_{j}^{\text{ols}} \text{ if } j\in{S}_\lambda(\beta),\end{array}\right. where S_\lambda (\beta) refers to the set of non-zero coordinates obtained when solving (\ell0).

The problem (\ell2) is strictly convex if \ell is the quadratic norm; in other words, the ridge estimator is always well defined, and it has an explicit form: \widehat{ {\beta}}_\lambda^{\text{ ridge}}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}(\mathbf{X}^T\mathbf{X})\widehat{ {\beta}}^{\text{ ols}}. Therefore, it can be deduced that \text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\lambda[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}~\widehat{ {\beta}}^{\text{ ols}} and \text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\sigma^2[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{X}[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}. With a matrix of orthonormal explanatory variables (i.e. \mathbf{X}^T \mathbf{X}=\mathbb{I}), these expressions simplify to \text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\frac{\lambda}{1+\lambda}~\widehat{ {\beta}}^{\text{ ols}}\text{ and }\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{\sigma^2}{(1+\lambda)^2}\mathbb{I}. Observe that \text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]<\text{Var}[\widehat{ {\beta}}^{\text{ ols}}]. And because \text{mse}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{p\sigma^2}{(1+\lambda)^2}+\frac{\lambda^2}{(1+\lambda)^2}\beta^T\beta, we obtain an optimal value for \lambda: \lambda^\star=p\sigma^2/\beta^T\beta.
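Since the ridge estimator has a closed form, it fits in a couple of lines; a minimal sketch (ours):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator (X'X + lam I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=50)
print(ridge(X, y, 0.0))    # lam = 0 recovers OLS
print(ridge(X, y, 10.0))   # coefficients are shrunk toward 0
```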

On the other hand, if \ell is no longer the quadratic norm but the \ell_1 norm, the problem (\ell1) is not always strictly convex, and in particular the optimum is not always unique (for example if \mathbf{X}^T \mathbf{X} is singular). But if it is strictly convex, then the predictions \mathbf{X}\beta will be unique. It should also be noted that any two solutions are necessarily consistent in terms of the sign of the coefficients: it is not possible to have \beta_j<0 in one solution and \beta_j>0 in another. From a heuristic point of view, the program (\ell1) is interesting because it often yields a corner solution, which corresponds to solving a problem of type (\ell0) – as shown visually in Figure 2.

Figure 2 : Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).

Let us consider a very simple model: y_i=x_i \beta+\varepsilon_i, with an \ell_1 penalty and an \ell_2 loss function. The problem (\ell1) then becomes \min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda|\beta|\big\}. The first-order condition is then -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}\pm 2\lambda=0, where the sign of the last term depends on the sign of \beta. Suppose that the least-squares estimator (obtained by setting \lambda=0) is (strictly) positive, i.e. \mathbf{y}^T \mathbf{x}>0. If \lambda is not too large, we can expect \beta to have the same sign as \widehat{\beta}^{\text{ols}}, so the condition becomes -2\mathbf{y}^T \mathbf{x}+2\mathbf{x}^T \mathbf{x}\beta+2\lambda=0, with solution \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}}. By increasing \lambda, there comes a point where \widehat{\beta}_\lambda=0. If we increase \lambda a little more, \widehat{\beta}_\lambda does not become negative, because in that case the last term of the first-order condition changes sign, and we would have to solve -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}- 2\lambda=0, whose solution \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}+\lambda}{\mathbf{x}^T\mathbf{x}} is positive (we assumed \mathbf{y}^T \mathbf{x}>0), so it is impossible to have \widehat{\beta}_\lambda <0 at the same time: \widehat{\beta}_\lambda remains at 0, which is then a corner solution. Things are of course more complicated in larger dimensions (Tibshirani & Wasserman (2016) go back at length over the geometry of the solutions), but, as Candès & Plan (2009) note, under minimal assumptions guaranteeing that the predictors are not strongly correlated, the Lasso obtains a quadratic error almost as good as if we had an oracle providing perfect information on the set of \beta_j's that are not zero. With some additional technical hypotheses, it can be shown that this estimator is "sparsistent", in the sense that the support of \widehat{\beta}_\lambda^{\text{lasso}} is that of \beta; in other words, the Lasso makes it possible to select variables (more discussion on this point can be found in Hastie et al. (2016)).

More generally, it can be shown that \widehat{\beta}_\lambda^{\text{lasso}} is a biased estimator, but its variance may be sufficiently low that the mean squared error is smaller than with least squares. To compare the three techniques, relative to the least-squares estimator (obtained when \lambda=0), if we assume that the explanatory variables are orthonormal, then \widehat{\beta}_{\lambda,j}^{\text{ subset}}=\widehat{\beta}_{j}^{\text{ ols}}\boldsymbol{1}_{|\widehat{\beta}_{j}^{\text{ ols}}|>b}, ~~\widehat{\beta}_{\lambda,j}^{\text{ ridge}}=\frac{\widehat{\beta}_{j}^{\text{ ols}}}{1+\lambda}~~\text{and}~~\widehat{\beta}_{\lambda,j}^{\text{ lasso}}=\text{sign}[\widehat{\beta}_{j}^{\text{ ols}}]\cdot(|\widehat{\beta}_{j}^{\text{ ols}}|-\lambda)_+
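These three shrinkage rules (valid under an orthonormal design) are immediate to code; a small sketch (ours) comparing hard thresholding, uniform shrinkage and soft thresholding:

```python
import numpy as np

def subset(b_ols, b):     # best subset: hard thresholding at level b
    return b_ols * (np.abs(b_ols) > b)

def ridge(b_ols, lam):    # ridge: uniform shrinkage by 1/(1 + lam)
    return b_ols / (1 + lam)

def lasso(b_ols, lam):    # lasso: soft thresholding at level lam
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0.0)

b = np.linspace(-3, 3, 7)   # a grid of OLS estimates
for f in (subset, ridge, lasso):
    print(f.__name__, f(b, 1.0).round(2))
```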

Figure 3 : Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).

To be continued with probably a final post this week (references are online here)…

Foundations of Machine Learning, part 3

This post is the seventh one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 6 is online here.

Boosting and sequential learning

As we have seen before, modelling here is based on solving an optimization problem, and solving the problem described by equation (6) is all the more complex as the functional space \mathcal{M} is large. The idea of boosting, as introduced by Schapire & Freund (2012), is to learn, slowly, from the errors of the model, in an iterative way. In the first step, we estimate a model m_1 for y, from \mathbf{X}, which leaves an error \varepsilon_1. In the second step, we estimate a model m_2 for \varepsilon_1, from \mathbf{X}, which leaves an error \varepsilon_2, etc. We then retain as a model, after k iterations, m^{(k)}(\cdot)=\underbrace{m_1(\cdot)}_{\sim y}+\underbrace{m_2(\cdot)}_{\sim \varepsilon_1}+\underbrace{m_3(\cdot)}_{\sim \varepsilon_2}+\cdots+\underbrace{m_k(\cdot)}_{\sim \varepsilon_{k-1}}=m^{(k-1)}(\cdot)+m_k(\cdot)~~~(7). Here, the error \varepsilon is seen as the difference between y and the model m(\mathbf{x}), but it can also be seen as the gradient associated with the quadratic loss function. Formally, \varepsilon can be seen as \nabla\ell in a more general context (here we find an interpretation that reminds us of residuals in generalized linear models).

Equation (7) can be seen as gradient descent, but written in a dual way. The problem is then rewritten as an optimization problem: m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\boldsymbol{x}_i)}_{\varepsilon_{k,i}},h(\boldsymbol{x}_i))\right\rbrace~~~(8), where the trick is to consider a relatively simple space \mathcal{H} (we speak of a "weak learner"). Classically, the functions of \mathcal{H} are step functions (which will be found in classification and regression trees), called "stumps". To ensure that learning is indeed slow, it is not uncommon to use a shrinkage parameter, and, instead of setting, for example, \varepsilon_1=y-m_1 (\mathbf{x}), we set \varepsilon_1=y-\alpha\cdot m_1 (\mathbf{x}) with \alpha\in[0,1]. It should be noted that it is because a non-linear space is used for \mathcal{H}, and learning is slow, that this algorithm works well. In the Gaussian linear model, remember that the residuals \varepsilon=y-\mathbf{x}^T\beta are orthogonal to the explanatory variables \mathbf{X}, and it is then impossible to learn from our errors. The main difficulty is to stop in time: after too many iterations, it is no longer the function m that is approximated, but the noise. This problem is called overfitting.

This presentation has the advantage of a heuristic reminiscent of an econometric model, by iteratively modelling the residuals with a (very) simple model. But this is often not the presentation used in the learning literature, which places more emphasis on an optimization-algorithm heuristic (and gradient approximation). The function is learned iteratively, starting from a constant value, m^{(0)}=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m)\right\rbrace, then we consider the following learning procedure {\displaystyle m^{(k)}=m^{(k-1)}+{\underset{h\in {\mathcal {H}}}{\text{argmin}}}\sum _{i=1}^{n}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})+h(\mathbf{x}_{i}))}~~~(9), which can be written, if \mathcal{H} is a set of differentiable functions, {\displaystyle m^{(k)}=m^{(k-1)}-\gamma_{k}\sum _{i=1}^{n}\nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})),} where {\displaystyle \gamma _{k}=\underset{\gamma }{\text{argmin }}\sum _{i=1}^{n}\ell\left(y_{i},m^{(k-1)}( \mathbf{x}_{i})-\gamma \nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}( \mathbf{x}_{i}))\right).} To better understand the relationship with the approach described above, at step k, pseudo-residuals are defined by setting r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x})=m^{(k-1)}( \mathbf{x})}\text{ for }i=1,\cdots,n. A simple model is then sought to explain these pseudo-residuals from the explanatory variables \mathbf{x}_i, i.e. r_{i,k}=h^\star(\mathbf{x}_i), where h^\star\in\mathcal{H}. In a second step, we look for an optimal multiplier by solving \gamma_k = \underset{\gamma\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m^{(k-1)}( \mathbf{x}_i)+\gamma h^\star(\mathbf{x}_i))\right\rbrace, and then update the model by setting m^{(k)}(\cdot)=m^{(k-1)}(\cdot)+\gamma_k h^\star (\cdot). More formally, we move from equation (8) – which clearly shows that we are building a model on residuals – to equation (9) – which is then translated as a gradient-calculation problem – noting that \ell(y,m+h)=\ell(y-m,h). Classically, the class \mathcal{H} of functions consists of regression trees. It is also possible to use a form of penalization by setting m^{(k)}(\cdot)=m^{(k-1)}(\cdot)+\nu\gamma_k h^\star (\cdot), with \nu\in(0,1). But we will come back a little further – in our next post – to the importance of penalization, before discussing the numerical aspects of optimization.
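For the quadratic loss, the pseudo-residuals are the ordinary residuals and the algorithm becomes particularly simple; here is a rough L2-boosting sketch (ours), with stumps as weak learners, a univariate regressor, and the step \gamma_k absorbed into the stump values:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split step function ('stump') for residuals r, L2 loss."""
    best = (np.inf, x.min(), r.mean(), r.mean())
    for s in np.unique(x)[:-1]:                  # candidate split points
        left, right = r[x <= s], r[x > s]
        err = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if err < best[0]:
            best = (err, s, left.mean(), right.mean())
    return best[1:]                              # (split, left mean, right mean)

def l2_boost(x, y, n_rounds=200, nu=0.1):
    """Iteratively fit stumps to the current residuals, with shrinkage nu."""
    pred = np.full(len(y), y.mean())             # m^(0): the best constant
    for _ in range(n_rounds):
        s, vl, vr = fit_stump(x, y - pred)       # weak learner on residuals
        pred += nu * np.where(x <= s, vl, vr)    # slow-learning update
    return pred
```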

To be continued (keep in mind that references are online here)…

Foundations of Machine Learning, part 2

This post is the sixth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 5 is online here.

The probabilistic formalism in the 80’s

We have a training sample, with observations (\mathbf{x}_i,y_i), where the variables y take values in a set \mathcal{Y}. In the case of classification, \mathcal{Y}=\{-1,+1\}, but a relatively general set can be considered (note that while econometricians prefer \mathcal{Y}=\{0,1\} – because of the Bernoulli distribution and because 0 and 1 are lower and upper bounds of probabilities – people in the "machine learning" community prefer \mathcal{Y}=\{-1,+1\}). A predictor m is a function taking values in \mathcal{Y}, used to label (or classify) future new observations, using features that lie in a set \mathcal{X}. It is assumed that the labels are produced by an (unknown) classifier f, called the target. For a statistician, this function would be the true model. Naturally, we want to build m as close as possible to f. Let \mathbb{P} be an (unknown) distribution on \mathcal{X}. The error of m with respect to target f is defined by \mathcal{R}_{\mathbb{P},f}(m)=\mathbb{P}[m(\boldsymbol{X})\neq f(\boldsymbol{X})]\text{ where }\boldsymbol{X}\sim\mathbb{P}, or equivalently \mathcal{R}_{\mathbb{P},f}(m)=\mathbb{P}\big[\{\boldsymbol{x}\in\mathcal{X}:m(\boldsymbol{x})\neq f(\boldsymbol{x})\}\big]. To obtain our "optimal" classifier, it becomes necessary to assume that there is a link between the data in our sample and the pair (\mathbb{P},f), i.e. a data-generating model. We then assume that the \mathbf{x}_i are obtained by independent draws from \mathbb{P}, and that y_i=f(\mathbf{x}_i). We can define the empirical risk of a classifier m as \widehat{{R}}(m)=\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}(m(\boldsymbol{x}_i)\neq y_i)

It is important to recognize that a perfect model cannot be found, in the sense of having \mathcal{R}_{\mathbb{P},f} (m)=0. Indeed, consider the simplest case, with \mathcal{X}=\{x_1,x_2\} and \mathbb{P} such that \mathbb{P}(\{x_1\})=p and \mathbb{P}(\{x_2\})=1-p. The probability of never observing \{x_2\} among the n observations is (1-p)^n, and if p<1/n, it is quite likely never to observe \{x_2\}, so it can never be predicted. We cannot therefore hope to have zero risk whatever \mathbb{P}. More generally, it is also possible to observe both \{x_1\} and \{x_2\} and, despite everything, to make mistakes on the labels. So, instead of looking for a perfect model, we can try to have an "approximately correct" model: we then try to find m such that \mathcal{R}_{\mathbb{P},f} (m)\leq\varepsilon, where \varepsilon is an a-priori specified threshold. But even this condition is too strong, and cannot be fulfilled; thus, we will usually ask that \mathcal{R}_{\mathbb{P},f} (m)\leq\varepsilon with some probability 1-\delta. Hence, we will try to be "probably approximately correct" (PAC), allowing a mistake with a probability \delta, again fixed a priori.

Also, when we build a classifier, we do not know either \mathbb{P} or f, but we give ourselves a precision criterion \varepsilon , and a confidence parameter \delta, and we have n observations. Note that n, \varepsilon and \delta can be linked. We then look for a model m such that R_{\mathbb{P},f} (m)\leq\varepsilon with probability (at least) 1-\delta, so that we are probably approximately correct. Wolpert (1996) has shown (see details in Wolpert & Macready (1997)) that there is no universal learning algorithm. In particular, it can be shown that there is \mathbb{P} such that R_{\mathbb{P},f} (m) is relatively high, with a relatively high probability (also).

The interpretation is that, since we cannot learn (in the PAC sense) over all functions m, we force m to belong to a particular class, denoted \mathcal{M}. Let us suppose, to start with, that \mathcal{M} contains a finite number of possible models. We can then show that, for all \varepsilon and \delta, and for all \mathbb{P} and f, if we have enough observations (more precisely n\geq \varepsilon^{-1} \log[\delta^{-1} |\mathcal{M}|]), then, with probability greater than 1-\delta, \mathcal{R}_{\mathbb{P},f} (m^\star)\leq\varepsilon, where m^\star \in \underset{m\in\mathcal{M}}{\text{argmin}}\Big\lbrace\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}(m(\boldsymbol{x}_i)\neq y_i)\Big\rbrace; in other words, m^\star is a model in \mathcal{M} that minimizes the empirical risk.

We can go a little further, staying in the case where \mathcal{Y}=\{-1,+1\}. A class \mathcal{M} of classifiers is called PAC-learnable if there is a function n_{\mathcal{M}}:[0,1]^2\rightarrow \mathbb{N} such that, for all \varepsilon, \delta and \mathbb{P}, if the target f is assumed to belong to \mathcal{M}, then, using n>n_{\mathcal{M}} (\varepsilon,\delta) observations \mathbf{x}_i drawn from \mathbb{P} and labelled y_i by f, there is m\in\mathcal{M} such that, with probability 1-\delta, \mathcal{R}_{\mathbb{P},f} (m)\leq\varepsilon. The function n_{\mathcal{M}} is then called the "sample complexity of learning". In particular, we have seen that if \mathcal{M} contains a finite number of classifiers, then \mathcal{M} is PAC-learnable with complexity n_{\mathcal{M}} (\varepsilon,\delta)=\varepsilon^{-1} \log[\delta^{-1} |\mathcal{M}|].

Naturally, we would like a more general result, especially if \mathcal{M} is not finite. To do this, the Vapnik–Chervonenkis (VC) dimension must be used, which is based on the idea of shattering points (for a binary classification). Consider k points \{\boldsymbol{x}_1,\cdots,\boldsymbol{x}_k\}, and consider the set {E}_k=\big\lbrace(m(\boldsymbol{x}_1),\cdots,m(\boldsymbol{x}_k))\text{ for }m\in\mathcal{M}\big\rbrace. Note that the elements of E_k belong to \{-1,+1\}^k; in other words, |E_k |\leq 2^k. We say that \mathcal{M} shatters the set of points if all combinations are possible, i.e. |E_k |=2^k. Intuitively, the labels of the set of points then do not provide enough information on the target f, because anything is possible. The VC dimension of \mathcal{M} is then VC(\mathcal{M})=\sup\big\lbrace k\text{ such that }\mathcal{M}\text{ shatters }\{\boldsymbol{x}_1,\cdots\boldsymbol{x}_k\}\big\rbrace

For example, if \mathcal{X}=\mathbb{R} and all (simple) models of the form [1] m_{a,b}=\mathbf{1}_{\pm}(x\in[a,b]) are considered, no set \{x_1,x_2,x_3\} of ordered points can be shattered, because it suffices to assign +1, -1 and +1 to x_1, x_2 and x_3 respectively; therefore VC<3. On the other hand, \{0,1\} is shattered, so VC\geq 2: the VC dimension of this set of predictors is 2. If we add one dimension, \mathcal{X}=\mathbb{R}^2, and consider all (simple) models of the form m_{\mathbf{a},\mathbf{b}}=\mathbf{1}_{\pm} (\mathbf{x}\in[\mathbf{a},\mathbf{b}]) (where [\mathbf{a},\mathbf{b}] refers to a rectangle), then the dimension of \mathcal{M} is here 4.

To introduce SVMs, let us place ourselves in the case where \mathcal{X}=\mathbb{R}^k, and consider separations by hyperplanes passing through the origin (we will say homogeneous), in the sense that m_{\mathbf{w}} (\mathbf{x})=\mathbf{1}_{\pm}(\mathbf{w}^T \mathbf{x}\geq 0). It can be shown that no set of k+1 points can be shattered by these homogeneous half-spaces in \mathbb{R}^k, and therefore VC(\mathcal{M})=k. If we add a constant, in the sense that m_{\mathbf{w},b} (\mathbf{x})=\mathbf{1}_{\pm}(\mathbf{w}^T \mathbf{x}+b\geq 0), we can show that no set of k+2 points can be shattered by these (non-homogeneous) half-spaces in \mathbb{R}^k, and therefore VC(\mathcal{M})=k+1. This dimension reminds us of the dimension of the model seen in the econometric context.

From this VC dimension, we deduce the so-called fundamental theorem of learning: if \mathcal{M} is a class of dimension d=VC(\mathcal{M}), then there are positive constants \underline{C} and \overline{C} such that the sample complexity for \mathcal{M} to be PAC-learnable satisfies \underline{C}\epsilon^{-1}\big(d+\log[\delta^{-1}]\big)\leq n_{\mathcal{M}}(\epsilon,\delta) \leq \overline{C}\epsilon^{-1}\big(d\log[\epsilon^{-1}]+\log[\delta^{-1}]\big). The link between the notion of learning (as defined in Valiant (1984)) and the VC dimension was clearly established in Blumer et al (1989).

Nevertheless, while the work of Vapnik and Chervonenkis is considered to be the foundation of statistical learning, Thomas Cover's work in the 1960s and 1970s should also be mentioned, in particular Cover (1965) on the capacities of linear models, and Cover & Hart (1967) on learning in the context of the k-nearest-neighbours algorithm. These studies linked learning, information theory (with the textbook Cover & Thomas (1991)), complexity and statistics. Other authors subsequently brought the two communities closer together, in terms of learning and statistics. For example, Halbert White proposed to see neural networks in a statistical context in White (1989), going so far as to state that "learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods". This turning point in the late 1980s anchored learning theory in a probabilistic context.

Objective and loss function

These choices (of objective and loss function) are essential, and very dependent on the problem under consideration. Let us begin by describing a historically important model, Rosenblatt's (1958) "perceptron", introduced for classification problems where y\in\{-1,+1\}, inspired by McCulloch & Pitts (1943). We have data \{(y_i,\mathbf{x}_i)\}, and we iteratively build a sequence of models m_k(\mathbf{x}), where at each step we learn from the errors of the previous model. In the perceptron, a linear model is considered, so that m(\mathbf{x})=\boldsymbol{1}_{\pm}(\beta_0+\mathbf{x}^T \boldsymbol{\beta}\geq 0)=\left\lbrace\begin{array}{l}+1\text{ if }\beta_0+\mathbf{x}^T \boldsymbol{\beta}\geq 0\\-1\text{ if }\beta_0+\mathbf{x}^T \boldsymbol{\beta}< 0\end{array}\right., where the \beta coefficients are often interpreted as "weights" assigned to each of the explanatory variables. We give ourselves initial weights (\beta_0^{(0)},\beta^{(0)}), which are updated taking into account the prediction error made between y_i and the prediction \widehat{y}_i^{(k)}=m^{(k)}(\mathbf{x}_i)=\boldsymbol{1}_{\pm}(\beta_0^{(k)}+\mathbf{x}^T \boldsymbol{\beta}^{(k)}\geq 0), with, in the case of the perceptron, \beta_j^{(k+1)}={\beta}_j^{(k)}+\eta\underbrace{(\mathbf{y}-\widehat{\mathbf{y}}^{(k)})^T}_{=\ell({\mathbf{y}},\widehat{\mathbf{y}}^{(k)})}\mathbf{x}_j. Here \ell(y,y')=\mathbf{1}(y\neq y') is a loss function, which gives a price to an error made by predicting \widehat{y}=m(\mathbf{x}) and observing y. For a regression problem, we can consider a quadratic error \ell_2, with \ell(y,m(\mathbf{x}))=(y-m(\mathbf{x}))^2, or an absolute error \ell_1, with \ell(y,m(\mathbf{x}))=|y-m(\mathbf{x})|. Here, for our classification problem, we used a misclassification indicator (we could discuss the symmetry of this loss function, which suggests that a false positive costs as much as a false negative). Once this loss function has been specified, we recognize in the problem described above a gradient descent, and we see that we are trying to solve m^\star(\mathbf{x})=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace~~~(6) for a predefined set of predictors \mathcal{M}. Any machine-learning problem is mathematically formulated as an optimization problem, whose solution determines a set of model parameters (if the family \mathcal{M} is described by a set of parameters – which can be coordinates in a functional basis). We can denote by \mathcal{M}_0 the space of hyperplanes of \mathbb{R}^p, in the sense that m\in\mathcal{M}_0 \text{\quad means \quad}m(\mathbf{x})=\beta_0+\beta^T\mathbf{x}\text{ where }\beta\in\mathbb{R}^p, generating the class of linear predictors. We then take the estimator that minimizes the empirical risk. Some recent work in statistical learning aims to study the properties of the estimator \widehat{m}^\star, known as the "oracle", in a family \mathcal{M} of estimators, \widehat{m}^{\star} =\underset{\widehat{m}\in\mathcal{M}}{\text{argmin}}\big\lbrace\mathcal{R}(\widehat{m},m)\big\rbrace. This estimator is, of course, impossible to compute because it depends on m, the true model, which is unknown.
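The perceptron update rule takes only a few lines of code; a minimal sketch (ours), with labels in \{-1,+1\} and the intercept handled by adding a column of ones:

```python
import numpy as np

def perceptron(X, y, eta=1.0, n_epochs=50):
    """Rosenblatt's perceptron: update the weights after each mistake."""
    Z = np.column_stack([np.ones(len(y)), X])    # prepend 1 for the intercept
    w = np.zeros(Z.shape[1])                     # (beta_0, beta)
    for _ in range(n_epochs):
        for z_i, y_i in zip(Z, y):
            if y_i * (z_i @ w) <= 0:             # misclassified observation
                w += eta * y_i * z_i             # move w toward y_i * z_i
    return w
```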

But let us come back a little more to these loss functions. A loss function \ell is a function \mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}_+, symmetric, which satisfies the triangle inequality, and such that \ell(x,y)=0 if and only if x=y. The associated norm is \|\cdot\|, such that \ell(x,y)=\|x-y\|=\ell(x-y,0) (using the fact that \ell(x,y+z)=\ell(x-y,z) – we will come back to this fundamental property later).

For a quadratic loss function, we can give a particular interpretation of this problem, since \overline{y}=\underset{m\in\mathbb{R}}{\text{argmin}} \left\lbrace\sum_{i=1}^n\frac{1}{n} [y_i-m]^2\right\rbrace=\underset{m\in\mathbb{R}}{\text{argmin}} \left\lbrace \sum_{i=1}^n \ell_2(y_i,m)\right\rbrace, where \ell_2 is the usual quadratic distance. If we assume – as we did in econometrics – that there is an underlying probabilistic model, and observe that \displaystyle{\mathbb{E}(Y)=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\left([Y-m]^2\right)\right\rbrace=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\big[\ell_2(Y,m)\big]\right\rbrace}, we see that what we are trying to obtain here, by solving problem (6) with the \ell_2 norm, is an approximation (in a given functional space \mathcal{M}) of the conditional expectation x\mapsto\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. Another particularly interesting loss function is the \ell_1 loss, \ell_1 (y,m)=|y-m|. It should be recalled that \displaystyle{\text{median}(\boldsymbol{y})=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n\ell_1(y_i,m)\right\rbrace}. The optimization problem \widehat{m}^{\star}=\underset{m\in\mathcal{M}_0}{\text{argmin}}\left\lbrace\sum_{i=1}^n\vert y_i-m(\mathbf{x}_i)\vert\right\rbrace is obtained in econometrics by assuming that the conditional distribution of Y follows a Laplace distribution centered on m(\mathbf{x}), and by maximizing the (log-)likelihood: the sum of the absolute values of the errors corresponds to the log-likelihood of a Laplace distribution. It should also be noted that if the conditional distribution of Y is symmetric with respect to 0, the median and the mean coincide. If this loss function is rewritten (up to a multiplicative constant) \ell_1(y,m)=\vert (y-m)(1/2-\boldsymbol{1}_{y\leq m})\vert, a generalization can be obtained for \tau\in[0,1]: \widehat{m}^\star_\tau=\underset{m\in\mathcal{M}_0}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell_\tau^{ q} (y_i,m(\mathbf{x}_i)) \right\rbrace, where \ell_{\tau}^{q}(x,y)= (x-y)(\tau-\boldsymbol{1}_{x\leq y}), which is the quantile regression of level \tau (Koenker, 2003; d'Haultefœuille & Givord, 2014). Another loss function, introduced by Aigner et al (1977) and analysed in Waltrup et al (2014), is the one associated with the notion of expectiles: \displaystyle{\ell}^{\text{ e}}_{\tau}(x,y)= (x-y)^2\cdot\big\vert\tau-\boldsymbol{1}_{x\leq y}\big\vert, with \tau\in[0,1]. We see the parallel with the quantile loss: \displaystyle{\ell}^{\text{ q}}_{\tau}(x,y)= \vert x-y\vert \cdot\big\vert\tau-\boldsymbol{1}_{x\leq y}\big\vert. Koenker & Machado (1999) and Yu & Moyeed (2001) also noted a link between this condition and the search for the maximum likelihood when the conditional distribution of Y follows an asymmetric Laplace distribution.
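Numerically, minimizing the empirical quantile loss does give the empirical quantile; a small sanity-check sketch (ours), using a grid search:

```python
import numpy as np

def quantile_loss(u, tau):     # ell^q_tau(x, y), with u = x - y
    return u * (tau - (u <= 0))

y = np.random.default_rng(5).normal(size=1000)
grid = np.linspace(-3, 3, 601)
m_star = grid[np.argmin([quantile_loss(y - m, 0.9).mean() for m in grid])]
print(m_star, np.quantile(y, 0.9))   # both estimate the 90% quantile
```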

In connection with this approach, Gneiting (2011) introduced the notion of "elicitable statistic" – or "elicitable measure" in its probabilistic (or distributional) version: a statistic T is said to be "elicitable" if there is a loss function \ell:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}_+ such that T(Y)=\underset{x\in\mathbb{R}}{\text{argmin}}\left\lbrace\int_{\mathbb{R}} \ell(x,y)dF(y)\right\rbrace=\underset{x\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\big[ \ell(x,Y)\big]\text{ where }Y\overset{\mathcal{L}}{\sim} F\right\rbrace. The mean (mathematical expectation) is thus elicitable by the quadratic distance \ell_2, while the median is elicitable by the distance \ell_1. According to Gneiting (2011), this property is essential for obtaining predictions and forecasts. There may then be a strong link between measures associated with probabilistic models and loss functions. Finally, Bayesian statistics provide a direct link between the form of the prior distribution and the loss function, as studied by Berger (1985) and Bernardo & Smith (2000). We will come back to the use of these different norms in the section on penalization.

To be continued (keep in mind that references are online here)…

[1] Where the indicator \mathbf{1}_{\pm} does not take values 0 or 1 (like the classical \mathbf{1} function), but -1 and +1.

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

In parallel with these tools developed by, and for, economists, a whole literature has developed around similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics developed around the principle of inference (or to explain the relationship linking y to the variables \mathbf{x}), while another culture is primarily interested in prediction. In a discussion following the article, David Cox states very clearly that in statistics (and econometrics) "predictive success (…) is not the primary basis for model choice". We will get back here to the roots of machine-learning techniques. The important point, as we will see, is that the main concern of machine learning is the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on out-of-sample evaluations.

A learning machine

Today, we speak of "machine learning" to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, note that historically other names have been given. For example, Friedman (1997) proposed to make the link between statistics (which closely resembles econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) and what was then called "data mining" (which then included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the "statistical learning" techniques described in Hastie et al (2009). But one should keep in mind that machine learning is a very large field of research.

The so-called "natural" learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also learns simultaneously the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization, discovery, more or less supervised or autonomous learning, etc. The idea in artificial intelligence is to take inspiration from the functioning of the brain to learn, to allow "artificial" or "automatic" learning by a machine. A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve to win. One historical approach was to teach the machine the rules of the game. While this allows it to play, it will not help the machine to play well. Assuming the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm with an evaluation function: in this algorithm, the machine searches forward in the tree of possible moves, as far as the computational resources allow (about ten moves ahead in chess, for example). Then it computes various criteria (which have been indicated to it beforehand) for all positions (number of pieces taken or lost, occupancy of the center, etc., in our chess example), and finally plays the move that maximizes its gain. Another example is the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (checks, ZIP codes on envelopes, etc.). The question is to predict the value of a variable y, knowing that, a priori, y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training sets, in other words here millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is to use a decision criterion based on the nearest neighbours whose labels are known (using a predefined metric).

The method of the nearest neighbours ("k-nearest neighbours") can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, order the observations by their distance to \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}); then we can consider as a prediction for y the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
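A bare-bones version of this predictor (ours, with the Euclidean distance) takes three lines:

```python
import numpy as np

def knn_predict(X, y, x_new, k=5):
    """Average of the y's of the k nearest neighbours of x_new."""
    d = np.linalg.norm(X - x_new, axis=1)    # Euclidean distances to x_new
    return y[np.argsort(d)[:k]].mean()       # mean over the k closest points
```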

Machine learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine then explores the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E with respect to a task T and a performance measure P if its performance on T, measured by P, improves with experience E. Task T can be a default score, for example, and performance P can be the percentage of errors made. The system learns if the percentage of correctly predicted defaults increases with experience.

As we can see, machine learning is basically a problem of optimizing a criterion based on data (from now on called training data). Many textbooks on machine-learning techniques propose algorithms without ever mentioning any probabilistic model. In Watt et al (2016), for example, the word "probability" is mentioned only once, with this footnote, which will surprise and amuse any econometrician: "the logistic regression can also be interpreted from a probabilistic perspective" (page 86). But many recent books offer a review of machine-learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the paradigm of "probably approximately correct" (PAC) learning, a probabilistic flavour was added to the previously very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).

To be continued (references are online here)…

Probabilistic Foundations of Econometrics, part 4

This post is the fourth one of our series on the history and foundations of econometric and machine learning models. Part 3 is online here.

Goodness of Fit, and Model Choice

In the Gaussian linear model, the coefficient of determination – denoted R^2 – is often used as a measure of fit quality. It is based on the variance decomposition formula \underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\bar{y})^2}_{\text{total variance}}=\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{\text{residual variance}}+\underbrace{\frac{1}{n}\sum_{i=1}^n (\widehat{y}_i-\bar{y})^2}_{\text{explained variance}}. The R^2 is defined as the ratio of explained variance to total variance, another interpretation of the coefficient that we introduced from the geometry of least squares: R^2= \frac{\sum_{i=1}^n (y_i-\bar{y})^2-\sum_{i=1}^n (y_i-\widehat{y}_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2}. The sums of squared errors in this expression can be rewritten in terms of log-likelihood. It should be remembered that, up to an additive constant (obtained with a saturated model), in generalized linear models the deviance is defined by {Deviance}(\widehat{\beta}) = -2\log[\mathcal{L}], which can also be written Deviance(\widehat{\mathbf{y}}). A null deviance can be defined as the one obtained without using the explanatory variables \mathbf{x}, so that \widehat{y}_i=\overline{y}. It is then possible to define, in a more general context (with a non-Gaussian distribution for y), R^2=\frac{{Deviance}(\overline{y})-{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}=1-\frac{{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}. However, this measure cannot be used to choose a model, if one wishes to end up with a relatively simple model, because it increases artificially with the addition of explanatory variables without significant effect. We will then tend to prefer the adjusted R^2, \bar R^2 = {1-(1-R^{2})\cdot{n-1 \over n-p}} = R^{2}-\underbrace{(1-R^{2})\cdot{p-1 \over n-p}}_{\text{penalty}}, where p is the number of parameters of the model. Measuring the quality of fit then penalizes overly complex models.

This idea will be found in the Akaike criterion, AIC=Deviance+2\cdot p, or in the Schwarz criterion, BIC=Deviance+\log(n)\cdot p. In large dimension (typically p>\sqrt{n}), we will tend to use a corrected AIC, defined by AIC_c=Deviance+2\cdot p\cdot n/(n-p-1).

These criteria are used in the so-called "stepwise" methods of variable selection. In the "forward" method, we start by regressing on the constant, then add one variable at a time, retaining the one that lowers the AIC criterion the most, until adding any variable increases the AIC of the model. In the "backward" method, we start by regressing on all variables, then remove one variable at a time, removing the one that lowers the AIC criterion the most, until removing any variable increases the AIC of the model. A sketch of the forward version is given below.
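Here is a forward-stepwise sketch (ours) for a Gaussian linear model, with the AIC written up to an additive constant:

```python
import numpy as np

def gaussian_aic(X, y):
    """AIC of a Gaussian linear model, up to an additive constant."""
    n, p = X.shape
    res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return n * np.log(np.sum(res ** 2) / n) + 2 * p

def forward_aic(X, y):
    """Add, one at a time, the variable that lowers AIC the most."""
    n, p = X.shape
    Z = np.column_stack([np.ones(n), X])       # column 0 is the constant
    selected = [0]                             # start from the constant only
    current = gaussian_aic(Z[:, selected], y)
    while True:
        candidates = [j for j in range(1, p + 1) if j not in selected]
        aics = [gaussian_aic(Z[:, selected + [j]], y) for j in candidates]
        if not candidates or min(aics) >= current:
            return selected, current           # any addition increases AIC
        selected.append(candidates[int(np.argmin(aics))])
        current = min(aics)
```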

Another justification for this notion of penalty (we will come back to this idea in machine learning) can be the following. Let us consider an estimator in the class of linear predictors, \mathcal{M}=\big\lbrace m:~m(\mathbf{x})=s(\mathbf{x})^T\mathbf{y} \text{ where }S=(s(\mathbf{x}_1),\cdots,s(\mathbf{x}_n))^T\text{ is some smoothing matrix}\big\rbrace, and assume that y=m_0 (x)+\varepsilon, with \mathbb{E}[\varepsilon]=0 and \text{Var}[\varepsilon]=\sigma^2\mathbb{I}, so that m_0 (x)=\mathbb{E}[Y|X=x]. From a theoretical point of view, the quadratic risk associated with an estimated model \widehat{m}, \mathbb{E}\big[(Y-\widehat{m}(\mathbf{X}))^2\big], can be written \mathcal{R}(\widehat{m})=\underbrace{\mathbb{E}\big[(Y-m_0(\mathbf{X}))^2\big]}_{\text{error}}+\underbrace{\mathbb{E}\big[(m_0(\mathbf {X})-\mathbb{E}[\widehat{m}(\mathbf{X})])^2\big]}_{\text{bias}^2}+\underbrace{\mathbb{E}\big[(\mathbb{E}[\widehat{m}(\mathbf{X})]-\widehat{m}(\mathbf{X}))^2\big]}_{\text{variance}}, if m_0 is the true model. The first term is sometimes called the "Bayes error", and does not depend on the selected estimator \widehat{m}.

The empirical quadratic risk associated with a model m is here \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n (y_i-m(\mathbf{x}_i))^2 (by convention). We recognize the mean squared error, "mse", which more generally gives the "risk" of the model m when another loss function is used (as we will discuss later on). It should be noted that \displaystyle{\mathbb{E}[\widehat{\mathcal{R}}_n(m)]=\frac{1}{n}\|m_0(\mathbf{x})-m(\mathbf{x})\|^2+\frac{1}{n}\mathbb{E}\big(\|{Y}-m_0(\mathbf{X})\|^2\big)}. We can show that n\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]=\mathbb{E}\big(\|Y-\widehat{m}(\mathbf{x})\|^2\big)=\|(\mathbb{I}-\mathbf{S})m_0\|^2+\sigma^2\|\mathbb{I}-\mathbf{S}\|^2, so that the (real) risk of \widehat{m} is {\mathcal{R}}_n(\widehat{m})=\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]+2\frac{\sigma^2}{n}\text{trace}(\boldsymbol{S}). So, if \text{trace}(\boldsymbol{S})\geq0 (which is not too strong an assumption), the empirical risk underestimates the true risk of the estimator. Actually, we recognize here the number of degrees of freedom of the model, the right-hand term corresponding to Mallows' C_p, introduced in Mallows (1973) using not the deviance but R^2.

Statistical Tests

The most traditional test in econometrics is probably the significance test, corresponding to the nullity of a coefficient in a linear regression model. Formally, it is the test of H_0:\beta_k=0 against H_1:\beta_k\neq 0. The so-called Student test, based on the statistic t_k=\widehat{\beta}_k/se_{\widehat{\beta}_k}, allows to decide between the two alternatives, using the p-value of the test, defined by \mathbb{P}[|T|>|t_k|] with T\overset{\mathcal{L}}{\sim} Std_\nu, where \nu is the residual number of degrees of freedom (\nu=n-p-1 for the standard linear model). In large dimension, however, this statistic is of very limited interest, because of a significant FDR (“False Discovery Rate”). Classically, with a significance level \alpha=0.05, 5% of the truly non-significant variables will falsely appear significant. Suppose that we have p=100 explanatory variables, but that only 5 are really significant. We can hope that these 5 variables will pass the Student test, but we can also expect about 5 additional variables (false positives) to emerge. We would then have 10 variables perceived as significant, while only half of them really are, i.e. an FDR of 50%. To avoid this recurrent pitfall in multiple testing, it is natural to use the Benjamini & Hochberg (1995) procedure.
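As a minimal sketch of this pitfall, and of the correction implemented in R in p.adjust(), the false-discovery scenario described above can be reproduced on simulated data (the design below is an assumption made for the illustration):

# p = 100 explanatory variables, only the first 5 really significant
set.seed(1)
n <- 200
p <- 100
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1:5] %*% rep(1, 5) + rnorm(n)
pv <- summary(lm(y ~ X))$coefficients[-1, 4]   # p-values of the p slopes
sum(pv < .05)                   # raw tests: some false positives expected
sum(p.adjust(pv, "BH") < .05)   # Benjamini-Hochberg, controlling the FDR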

From a correlation to some causal effect

Econometric models are used to implement public policy evaluations. It is therefore essential to fully understand the underlying mechanisms in order to know which variables actually make it possible to act on a variable of interest. But then we move on to another important dimension of econometrics. Jerry Neyman was responsible for the first work on the identification of causal mechanisms, and Rubin (1974) then formalized the approach, called the “Rubin causal model” in Holland (1986). The first approaches to the notion of causality in econometrics were based on the use of instrumental variables, regression discontinuity designs, difference-in-differences analyses, and natural (or not) experiments. Causality is usually inferred by comparing the effect of a policy – or more generally of a treatment – with its counterfactual, ideally given by a random control group. The causal effect of the treatment is then defined as \Delta=y_1-y_0, i.e. the difference between what the situation would be with treatment (noted t=1) and without treatment (noted t=0). The concern is that only y=t\cdot y_1+(1-t)\cdot y_0 and t are observed. In other words, the causal effect of variable t on y is not observed (since only one of the two potential outcomes – y_0 or y_1 – is observed for each individual), but it is also individual, and therefore a function of the x-covariates. Generally, by making assumptions about the distribution of the triplet (Y_0,Y_1,T), some parameters of the causal effect distribution become identifiable, based on the density of the observable variables (Y,T). Classically, we will be interested in the moments of this distribution, in particular the average treatment effect in the population, \mathbb{E}[\Delta], or even just the average treatment effect on the treated, \mathbb{E}[\Delta|T=1]. If the outcome (Y_0,Y_1) is independent of the treatment assignment variable T, it can be shown that \mathbb{E}[\Delta]=\mathbb{E}[Y|T=1]- \mathbb{E}[Y|T=0]. But if this independence hypothesis is not satisfied, there is a selection bias, often associated with \mathbb{E}[Y_0|T=1]- \mathbb{E}[Y_0|T=0]. Rosenbaum & Rubin (1983) propose to use a propensity score (the propensity to be treated), p(x)=\mathbb{P}[T=1|X=x], noting that if the variable Y_0 is independent of access to treatment T conditionally on the explanatory variables X, then it is independent of T conditionally on the score p(X): it is then sufficient to match individuals using their propensity score. Heckman et al. (2003) thus propose a kernel estimator on the propensity score, which provides a simple estimator of the average effect of the treatment on the treated.
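As a minimal sketch of the role of the propensity score, consider the following simulated example in R, where the true causal effect is \Delta=2 (all names and the data-generating process are assumptions); rather than matching, we use here the closely related inverse-probability weighting, based on the same score p(x):

# selection bias: treatment probability depends on x
set.seed(1)
n <- 10000
x <- rnorm(n)
t <- rbinom(n, 1, plogis(x))
y <- 2 * t + x + rnorm(n)                  # causal effect of t is 2
mean(y[t == 1]) - mean(y[t == 0])          # naive contrast, biased upwards
ps <- glm(t ~ x, family = binomial)$fitted.values   # estimated p(x)
mean(y * t / ps) - mean(y * (1 - t) / (1 - ps))     # reweighted, close to 2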

To be continued next time, when we will introduce “machine learning techniques” (the references mentioned above are online here).

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential law (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0} based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta) For the Gaussian linear regression we consider an identity link, while for the Poisson regression, the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to a first-order Taylor expansion of g, we obtain an algorithm of the form\widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_kBy iterating, we will define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].

From a numerical point of view, the computer solves the first-order condition, and the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when the observations are not integers (they just need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.
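A minimal sketch of this remark in R: the same score equations can be solved on positive, non-integer observations (the quasipoisson family is used here simply to avoid the warnings that family=poisson would throw on non-integer data; the simulated observations are an assumption):

# "Poisson regression" on positive, non-integer outcomes
set.seed(1)
x <- rnorm(100)
y <- exp(1 + x) * rgamma(100, 10, 10)      # positive, but not integer-valued
reg <- glm(y ~ x, family = quasipoisson)   # same first-order condition
coef(reg)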

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of the logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the isoprobability curves are here the parallel hyperplanes \beta_0+\mathbf{x}^T\beta=\text{constant}. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}_i^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent model of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi is fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
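A minimal sketch comparing the two models in R, on simulated data (the data-generating process is an assumption):

# logit vs. probit on the same data
set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-1 + 2 * x))
logit  <- glm(y ~ x, family = binomial(link = "logit"))
probit <- glm(y ~ x, family = binomial(link = "probit"))
cbind(logit = coef(logit), probit = coef(probit))   # probit slopes smaller, by a factor ~1.6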

Regression in high dimension

As we mentioned earlier, the first order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s of \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (defined as L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to use non-uniform sub-sampling, with a probability related to the influence of the observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be computed once the model has been estimated).

In general, we will talk about massive data when the n\times p data table does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many machine learning libraries use iterative methods to solve the first-order condition. When the parametric model to be calibrated is convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). The latter avoids having to compute, at each iteration, the gradient over every observation of the training base. Rather than averaging the descent direction over all observations at each iteration, we start by drawing (without replacement) one observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is computed (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta(\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Shuffle the data
  • Iteration step: For t=1,\cdots, n, we draw i\in\{1,\cdots,n\} without replacement, and we set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(\mathbf{x}_i)) } }{ \partial{ \beta}}

This whole pass over the data can be repeated several times, depending on the user’s needs. The advantage of this method is that, at each iteration, it is not necessary to compute the gradient over all observations (there is no longer a sum over the n observations). It is therefore suitable for large databases. This algorithm relies on a convergence in probability towards a neighborhood of the optimum (and not the optimum itself). A minimal sketch is given below.
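Here is a minimal sketch of such a stochastic gradient descent for the quadratic loss, on a simulated linear model (the helper sgd_lm, the constant learning rate gamma and the data below are all assumptions made for the illustration):

# stochastic gradient descent, quadratic loss, constant learning rate
sgd_lm <- function(X, y, gamma = .01, epochs = 5) {
  beta <- rep(0, ncol(X))
  for (e in 1:epochs) {
    for (i in sample(nrow(X))) {   # draw observations without replacement
      grad <- -2 * X[i, ] * (y[i] - sum(X[i, ] * beta))
      beta <- beta - gamma * grad  # one gradient vector per iteration
    }
  }
  beta
}
set.seed(1)
X <- cbind(1, rnorm(1000))         # intercept and one covariate
y <- X %*% c(1, 2) + rnorm(1000)
sgd_lm(X, y)                       # should be close to (1, 2)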

(references will be given in the very last post of that series) To be continued

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let’s note \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Note \mathcal{E}_X the space generated by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered here. Note that no distributional assumption is made in this section: the geometric properties are derived from the properties of expectation and variance in the space of finite variance variables.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. The forecast is then \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. Since \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that the residuals are an innovation term, unpredictable in the sense that \Pi_{X}\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}} The coefficient of determination, R^2, is then interpreted as the square of the cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta)An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes \mathbf{y}=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta}_2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables of the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, on each of the sets of explanatory variables. This is a double projection theorem, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between the conditional expectation and the orthogonal projection in the space of variables of finite variance).
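A quick numerical check of this double projection formula in R, on simulated data (the design is an assumption made for the illustration):

# Frisch-Waugh: the X2 coefficients from the joint regression coincide
# with those of the regression on the X1-orthogonalized variables
set.seed(1)
n <- 100
X1 <- matrix(rnorm(2 * n), n, 2)
X2 <- matrix(rnorm(2 * n), n, 2)
y <- X1 %*% c(1, -1) + X2 %*% c(2, .5) + rnorm(n)
coef(lm(y ~ X1 + X2 - 1))                            # joint estimation
M1 <- diag(n) - X1 %*% solve(crossprod(X1), t(X1))   # projection on the orthogonal of E_{X1}
coef(lm(M1 %*% y ~ M1 %*% X2 - 1))                   # same X2 coefficients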

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf{\beta}_1 + \underbrace{(\mathbf{X}_1^T\mathbf{X}_1)^{-1} \mathbf{X}_1^T \mathbf{X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias (\beta_{12}) being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b}_1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us to see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, computed from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns of the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or the complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is harder to obtain explicitly.

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form, since\widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_iwhere\mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)} where K(\cdot) is a kernel function, which assigns a weight that decreases as x_i moves away from x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic expansions, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right)and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}}for some constants that can be estimated (see Simonoff (1996) for a discussion). These two functions evolve inversely with h, as shown in Figure 1 (where the meta-parameter on the x-axis is, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.
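A minimal sketch of such a kernel regression in R, using the ksmooth() function, illustrating the role of the bandwidth h (the simulated data and the two bandwidth values are assumptions made for the illustration):

# kernel regression with a Gaussian kernel, small and large bandwidths
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = .3)
plot(x, y, col = "grey")
lines(ksmooth(x, y, kernel = "normal", bandwidth = .05), col = "red")  # small h: high variance
lines(ksmooth(x, y, kernel = "normal", bandwidth = .5), col = "blue")  # large h: high bias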

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).

The natural idea is then to try to minimize the mean squared error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then to integrate over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})}while\text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x)and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}}as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}}The main problem here, in practice, is that many of the terms in the expression above are unknown. Machine learning offers computational techniques, where the econometrician was used to searching for asymptotic (mathematical) properties.

To be continued (references mentioned above are online here)…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I wanted to get into details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)) showed, econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that the observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we are interested in. In the following, we will note x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well; in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can still be found in all statistical textbooks today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that, under broad conditions, the sum of a large number of random variables is well approximated by the normal law.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming that the variables (Y_i, \mathbf{X}_i) are independent (it is possible to imagine temporal observations – then we would have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~(1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written:  \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. Then we will set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbraceThe maximum likelihood estimator is thus obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.

The first order conditions allow to find the normal equations, whose matrix writing is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator:\widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2)using residual-based writing (as often in econometrics), y=\mathbf{x}^T\mathbf{\beta}+\varepsilon. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct this estimator given by equation (2) without that Gaussian assumption. Indeed, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with smallest variance[3] among unbiased linear estimators. Furthermore, even if normality does not hold at finite distance, asymptotically this estimator is Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma})as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
The condition of having a full rank \mathbf{X} matrix can be (numerically) too strong in large dimension. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T still exists, for any \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
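A minimal sketch of this ridge estimator in R, in explicit matrix form, on a simulated nearly-collinear design (an assumption made for the illustration):

# ridge estimator (X'X + lambda I)^{-1} X'y on nearly collinear columns
set.seed(1)
n <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = .01)   # almost collinear with x1
X <- cbind(x1, x2)
y <- x1 + rnorm(n)
lambda <- 1
solve(crossprod(X) + lambda * diag(2), crossprod(X, y))
# solve(crossprod(X), crossprod(X, y)) would be numerically fragile here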

Residuals

It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Equation (1) is then often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3)where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables, from some \mathcal{N}(0,\sigma^2) distribution. With a vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}] Those (estimated) residuals are basic tools for diagnosing the relevance of the model.

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x}))where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_iwhere the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)} While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in counting models, or logistic regression.

However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear to begin with) between the quantities of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term) where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed. But this interpretation often makes the link between an economic relationship and an econometric model difficult: economic theory speaks abstractly about a relationship between two magnitudes, while the econometric model imposes a specific form (which magnitude is y and which magnitude is x), as shown in more detail in Morgan (1990), Chapter 7.

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.

Econometrics vs. Machine Learning with Temporal Patterns

A few months ago, I did publish a (long) post entitled ‘some thoughts on economics, mathematics, econometrics, machine learning, etc‘. In that post, I was discussing possible differences between foundations of econometrics, and machine learning. I wanted to get back today on an important point, related to training/sampling datasets, when we have temporal data.

I was discussing this morning, with a student of the Data Science for Actuaries program, an interesting point related to claim frequency models, for insurance ratemaking. Since the goal is to predict claims frequency (to assess the level of the insurance premium), he suggested using old data to train the model, and more recent data to test it. The problem is that the model did not incorporate any temporal pattern, and we got surprising results.

Consider here a simple dataset,

> set.seed(1)
> n=50000
> X1=runif(n)
> T=sample(2000:2015,size=n,replace=TRUE)
> L=exp(-3+X1-(T-2000)/20)
> E=rbeta(n,5,1)
> Y=rpois(n,L*E)
> B=data.frame(Y,X1,L,T,E)

Claims frequency is driven by a Poisson process, with one covariate, X1, and we assume that the intensity decreases (with an exponential rate). Consider here a standard Poisson regression, without any time effect

> reg=glm(Y~X1+offset(log(E)),data=B,
+ family=poisson)

We can also compute the empirical annualized claims frequency

> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B[abs(B$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp=Vectorize(p)(seq(.05,.95,by=.1))

and plot the two curves on the same graph,

> plot(seq(.05,.95,by=.1),vp,type="b")
> lines(u,exp(v),lty=2,col="red")

This is what we usually do in econometrics. In machine learning, and more specifically to assess the quality of the model, and for model selection, it is common to split the dataset in two parts. A training sample, and a validation sample. Consider some randomized training/validation samples, then fit a model on the training sample, and finally use it to get a prediction,

> idx=sample(1:nrow(B),size=nrow(B)*7/8)
> B_a=B[idx,]
> B_t=B[-idx,]
> reg=glm(Y~X1+offset(log(E)),data=B_a,
+ family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

The blue curve is the prediction on the training sample (as we usually do in econometrics), but then the red curve is the prediction on the testing sample. Here, volatility probably comes from the small size of the testing sample (1 observation out of 8).

Now, what if we use the year as a splitting criterion: we fit a model on older years, and we test it on recent ones,

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)
> reg=glm(Y~X1+offset(log(E)),data=B_a,family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

Clearly, we miss something here…

We were looking at such a graph this morning, and it took me some time to understand how training and validation samples were designed, and that there was a possible temporal effect (actually, this morning, it was based on a 3 year training sample, and a 1 year validation sample).

Since there is a temporal pattern, let us capture it. As an econometrician, let me use a regression model

> reg=glm(Y~X1+T+offset(log(E)),data=B,
+ family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*(2000:2015)))
> lines(u,v,lty=2,col="red")

(I focus only on the evolution of the temporal variable on that graph).

Here, we use a linear model, but there is usually no reason to assume linearity. So we might consider splines

> library(splines)
> reg=glm(Y~X1+bs(T)+offset(log(E)),
+ data=B,family=poisson)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> v2=predict(reg,newdata=data.frame(X1=0,
+ T=2000:2015,E=1))
> plot(2000:2015,exp(v2),type="b")
> lines(u,v,lty=2,col="red")

But here again, why should we assume that there is an underlying smooth function? There might be some breaks… So let us consider a regression on factors

> reg=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B,family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[2:17]),type="b")
> lines(u,v,lty=2,col="red")

An alternative might be to consider some more general model, like a regression tree

> library(rpart)
> reg=rpart(Y~X1+T+offset(log(E)),data=B,
+ method="poisson",cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

Here, it seems that something went wrong. I guess it’s coming from the exposure. So consider a simpler model, on the annualized frequency, and with weights related to the exposure

> reg=rpart(Y/E~X1+T,data=B,weights=B$E,cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

That was the econometrician’s perspective. From a machine learning perspective, consider a training sample (here based on old data) and a validation sample (based on more recent ones)

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)

If we consider a model, it is easy to get a prediction on recent years, even if the model was designed to model older ones,

> reg_a=glm(Y~X1+T+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*c(2000:2013,
+ NA,NA)),type="b")
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(C[1]+C[3]*2014:2015),
+ pch=19,col="blue")

But if we use years as factors, things are more complicated.

> reg_a=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> RMSE=function(A){
+   L=exp(C[1]*B_t$X1+ A[1]*(B_t$T==2014) + A[2]*(B_t$T==2015))
+   Y_t=L*B_t$E
+   sum( (Y_t - B_t$Y )^2)}
> i=optim(c(.4,.4),RMSE)$par
> plot(2000:2015,c(exp(C[2:15]),NA,NA))
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(i),pch=19,col="blue")

because we need to get a prediction on levels that were not in our training sample. Here, we minimize the RMSE to quantify the factor levels for the recent years. And the output is not that bad.

So yes, it is possible to train a model on older data, and test it on recent years. But one should be careful, and properly take temporal patterns into account.

Regression model and interaction(s) between factors

In a regression model, we want to write y_i=m(\mathbf{x}_i)+\varepsilon_i

When we restrict ourselves to a linear model, we write m(\mathbf{x})=\beta_0+\beta_1x_1+\cdots+\beta_kx_k

or m(\mathbf{x})=\beta_0+\sum_{j=1}^k\beta_jx_j

But we suspect that we are missing something… in particular, we will miss all the possible interactions. We can cross the variables, and assume that m(\mathbf{x})=\beta_0+\sum_{j=1}^k\beta_jx_j+\sum_{j<j'}\beta_{j,j'}x_jx_{j'}

which can be extended further, to order 3, m(\mathbf{x})=\beta_0+\sum_{j}\beta_jx_j+\sum_{j<j'}\beta_{j,j'}x_jx_{j'}+\sum_{j<j'<j''}\beta_{j,j',j''}x_jx_{j'}x_{j''}

or even further.

Suppose that our variables are qualitative here, and more precisely binary. Let us take a simple example, with (classical) credit risk data[1]. The dataset can be found via

library(evtree)
db=GermanCredit

or directly

myVariableNames = c("checking_status","duration","credit_history",
"purpose","credit_amount","savings","employment","installment_rate",
"personal_status","other_parties","residence_since","property_magnitude",
"age","other_payment_plans","housing","existing_credits","job",
"num_dependents","telephone","foreign_worker","class")

GermanCredit = read.table(
"http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data",
header=FALSE,col.names=myVariableNames)

To start with, let us keep three explanatory variables,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"))
reg=glm(Y~X1+X2+X3,data=db,family=binomial)
summary(reg)

The regression without interactions gives here

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5431  -0.8421  -0.6295   1.3994   1.9999  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -1.8544     0.1699 -10.915  < 2e-16 ***
X1TRUE        0.3363     0.1496   2.249   0.0245 *  
X2TRUE        1.3462     0.2347   5.735 9.76e-09 ***
X3TRUE        1.0001     0.1787   5.596 2.19e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1143.6  on 996  degrees of freedom
AIC: 1151.6

Number of Fisher Scoring iterations: 4

There are several possible interactions here (let us restrict ourselves to pairs). This is what we observe when we run the regression

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3 + X1:X2 + X1:X3 + X2:X3, family = binomial, 
    data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5369  -0.8281  -0.6439   1.3954   1.9638  

Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)   -1.77109    0.20070  -8.825  < 2e-16 ***
X1TRUE         0.30296    0.33737   0.898 0.369186    
X2TRUE         0.88353    0.54255   1.628 0.103421    
X3TRUE         0.87709    0.22583   3.884 0.000103 ***
X1TRUE:X2TRUE -0.37917    0.49343  -0.768 0.442225    
X1TRUE:X3TRUE  0.09178    0.37278   0.246 0.805522    
X2TRUE:X3TRUE  0.80923    0.58185   1.391 0.164293    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1141.0  on 993  degrees of freedom
AIC: 1155

Number of Fisher Scoring iterations: 4

We can draw a picture to visualize the interactions: we have three vertices (our three variables), and we visualize the interactions on the edges between them

indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))  # the three pairs of variables
k=3
theta=pi/2+2*pi*(0:(k-1))/k                # k vertices on the unit circle
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",
xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){                 # one edge per pairwise interaction
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)  # interaction coefficient
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)                  # label the vertices

which gives here, for our three variables

This model might seem incomplete, since we only look at pairwise interactions between the variables. Actually, it is because the non-crossed variables are (visually) missing. We can add them if we want (at the risk of overloading the picture)

cercle=function(c,r,cl) lines(c[1]+r*cos(seq(0,2*pi,length=501)),
c[2]+r*sin(seq(0,2*pi,length=501)),col=cl)

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives here

If we change the ‘direction’ of our variables (recoding them the other way around, swapping TRUE and FALSE), we obtain the following graph

dbinv=db
dbinv[,2:k]=1-dbinv[,2:k]
reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=dbinv,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which can then be compared with the previous graph

With 5 variables, the number of possible interactions increases… even if many are likely to be non-significant. We can already focus on the possible pairs of crossed interactions. To simplify the code, we will use two local functions,

vrepeach=function(x,e){    # repeat each x[i] exactly e[i] times
v=NULL
for(i in 1:length(e)){v=c(v,rep(x[i],each=e[i]))}
return(v)}
vreplength=function(x,l){  # concatenate the tails x[l[i]:length(x)]
v=NULL
for(i in 1:length(l)){v=c(v,x[l[i]:length(x)])}
return(v)}

and then we adapt the previous code

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives a more complex diagram,

We can also take just 2 variables, with 4 and 3 levels respectively. We will extract three indicator variables for the first one (the remaining level being the reference level), and two for the second one,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status=="A12",
X2=GermanCredit$checking_status=="A13",
X3=GermanCredit$checking_status=="A14",
X4=GermanCredit$employment%in%c("A72","A73"),
X5=GermanCredit$employment%in%c("A74","A75"))
k=5
indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

We see that several interactions are then no longer possible, on the left-hand side (between the three levels of the same variable) and on the right-hand side

We can actually simplify the graphs, by visualizing only the significant interactions.

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives here

Here, a single crossed interaction is significant, and almost all the variables are. And if we go back to the model with 5 factors,

db=data.frame(Y=GermanCredit$class-1,X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"),
X4=GermanCredit$employment%in%c("A71","A72"),
X5=GermanCredit$other_payment_plans=="A143")

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

we obtain

I do not know whether my graphs are relevant or not. But I find them pretty. Actually, I stumbled somewhat by chance[2] upon Taguchi’s tables, developed by Gen’ichi Taguchi (田口 玄一). The problem is that I did not understand anything… Well, let us say I thought I understood, then I kept drawing… If anyone could explain Taguchi’s graphs to me on my example, I am interested! Because I doubt it is what I have been doing all along…

[1] This database is used extensively in the fourth chapter of Computational Actuarial Science with R, to appear in the coming months.

[2] In this case, chance was @Benavent, who aroused my curiosity this morning by telling me about these tables, which I had never heard of before! I had even quickly read Taniguchi (谷口 ジロー), and I could not see the connection with statistics…

Non-observable vs. observable heterogeneity factor

This morning, in the ACT2040 class (on non-life insurance), we discussed the difference between observable and non-observable heterogeneity in ratemaking (from an economic perspective). To illustrate that point (we will spend more time, later on, discussing observable and non-observable risk factors), we looked at the following simple example. Let X denote the height of a person. Consider the following dataset

> Davis=read.table(
+ "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

There is a small typo in the dataset, so let us make manual changes here

> Davis[12,c(2,3)]=Davis[12,c(3,2)] 

Here, the variable of interest is the height of a given person,

> X=Davis$height 

If we look at the histogram, we have

> hist(X,col="light green", border="white",proba=TRUE,xlab="",main="")

Can we assume that we have a Gaussian distribution ?

Maybe not… Here, if we fit a Gaussian distribution, plot it, and add a kernel based estimator, we get

> library(MASS)
> (param <- fitdistr(X,"normal")$estimate) 
> f1 <- function(x) dnorm(x,param[1],param[2]) 
> x=seq(100,210,by=.2) 
> lines(x,f1(x),lty=2,col="red") 
> lines(density(X))


If you look at that black line, you might think of a mixture, i.e. something like f(x)=p\cdot\varphi(x;\mu_1,\sigma_1^2)+(1-p)\cdot\varphi(x;\mu_2,\sigma_2^2)

(using standard mixture notations, \varphi denoting a Gaussian density). Mixtures are obtained when we have a non-observable heterogeneity factor: with probability p, we have a random variable X_1\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu_1,\sigma_1^2) (call it type [1]), and with probability 1-p, a random variable X_2\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu_2,\sigma_2^2) (call it type [2]). So far, nothing new. And we can fit such a mixture distribution, using e.g.


> library(mixtools) 
> mix <- normalmixEM(X)
 number of iterations= 335 
> (param12 <- c(mix$lambda[1],mix$mu,mix$sigma)) 
[1] 0.4002202 178.4997298 165.2703616 6.3561363 5.9460023  

If we plot that mixture of two Gaussian distributions, we get

> f2 <- function(x){ param12[1]*dnorm(x,param12[2],param12[4])+
+ (1-param12[1])*dnorm(x,param12[3],param12[5]) }
> lines(x,f2(x),lwd=2, col="red")
> lines(density(X))

Not bad. Actually, we can try to maximize the likelihood with our own code,

> logdf <- function(x,parameter){
+ p <- parameter[1]
+ m1 <- parameter[2]
+ s1 <- parameter[4]
+ m2 <- parameter[3]
+ s2 <- parameter[5]
+ return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2)))
+ }
> logL <- function(parameter) -sum(logdf(X,parameter))
> # constraints: p>=0, p<=1, s1>=0 and s2>=0
> Amat <- matrix(c(1,-1,0,0, 0,0,0,0, 0,0,0,0,
+ 0,0,1,0, 0,0,0,1), 4, 5)
> bvec <- c(0,-1,0,0)
> constrOptim(c(.5,160,180,10,10), logL, NULL, ui = Amat, ci = bvec)$par

[1]   0.5996263 165.2690084 178.4991624   5.9447675   6.3564746

Here, we include some constraints, to insure that the probability belongs to the unit interval, and that the variance parameters remain positive. Note that the output is close to the previous one.
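As a quick sanity check (my addition here), recall that constrOptim() requires the starting point to be strictly feasible, i.e. Amat %*% theta > bvec componentwise. With the starting value used above,

> Amat %*% c(.5,160,180,10,10) - bvec
     [,1]
[1,]  0.5
[2,]  0.5
[3,] 10.0
[4,] 10.0

All four entries are positive, so the constraints on the probability and the two standard deviations are indeed satisfied at the starting point.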

Let us try something a little bit more complex now. What if we assume that the two underlying distributions have the same variance, namely

f(x)=p\cdot\varphi(x;\mu_1,\sigma)+(1-p)\cdot\varphi(x;\mu_2,\sigma)
In that case, we can reuse the previous code, with a few small changes,

> logdf <- function(x,parameter){
+ p <- parameter[1]
+ m1 <- parameter[2]
+ s1 <- parameter[4]
+ m2 <- parameter[3]
+ s2 <- parameter[4]
+ return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2)))
+ }
> logL <- function(parameter) -sum(logdf(X,parameter))
> Amat <- matrix(c(1,-1,0,0,0,0,0,0,0,0,0,1), 3, 4)
> bvec <- c(0,-1,0)
> (param12c= constrOptim(c(.5,160,180,10), logL, NULL, ui = Amat, ci = bvec)$par)

[1]   0.6319105 165.6142824 179.0623954   6.1072614
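Since this equal-variance model is nested in the previous one, a natural question (this comparison is my addition, and chi-square asymptotics are only a rough guide for mixtures) is whether the constraint is rejected by a likelihood ratio test,

> # log-likelihood of the 5-parameter fit (distinct variances)
> logdf5 <- function(x,parameter){
+ p <- parameter[1]
+ log(p*dnorm(x,parameter[2],parameter[4])+(1-p)*dnorm(x,parameter[3],parameter[5]))
+ }
> # logL() is now the 4-parameter (common variance) version, so -logL() is a log-likelihood
> LR <- 2*(sum(logdf5(X,param12))+logL(param12c))
> 1-pchisq(LR,1)   # p-value of the likelihood ratio test, one constraint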

This is what we can do if we cannot observe the heterogeneity factor. But wait… we actually have some information in the dataset: for instance, the sex of each person. Now, if we look at histograms of the height per sex, and at kernel based density estimates of the height per sex, we have

So, it looks like the height of males and the height of females are different. Maybe we can use that variable, which was actually observed, to explain the heterogeneity in our sample. Formally, the idea here is to consider a mixture with an observable heterogeneity factor, the sex:

f(x)=p_M\cdot\varphi(x;\mu_M,\sigma_M)+(1-p_M)\cdot\varphi(x;\mu_F,\sigma_F)

where p_M denotes the proportion of males.
We now have an interpretation of what we used to call classes [1] and [2] previously: males and females. And here, estimating the parameters is quite simple,

>  sex=Davis$sex
>  (pM <- mean(sex=="M"))
[1] 0.44
>  (paramF <- fitdistr(X[sex=="F"],"normal")$estimate)
      mean         sd 
164.714286   5.633808 
>  (paramM <- fitdistr(X[sex=="M"],"normal")$estimate)
      mean         sd 
178.011364   6.404001

And if we plot the density, we have

> f4 <- function(x) pM*dnorm(x,paramM[1],paramM[2])+(1-pM)*dnorm(x,paramF[1],paramF[2])
> lines(x,f4(x),lwd=3,col="blue")

What if, once again, we assume identical variances? Namely, the model becomes

f(x)=p_M\cdot\varphi(x;\mu_M,\sigma)+(1-p_M)\cdot\varphi(x;\mu_F,\sigma)

Then a natural idea to derive an estimator of the common variance, based on the previous computations, is to pool the squared deviations within the two groups,

\widehat{\sigma}^2=\frac{1}{n-2}\left(\sum_{i:\text{sex}_i=M}[x_i-\widehat{\mu}_M]^2+\sum_{i:\text{sex}_i=F}[x_i-\widehat{\mu}_F]^2\right)

The code is here

> s=sqrt((sum((X[sex=="M"]-paramM[1])^2)+sum((X[sex=="F"]-paramF[1])^2))/(nrow(Davis)-2))
> s
[1] 6.015068

and again, it is possible to plot the associated density,

> f5 <- function(x) pM*dnorm(x,paramM[1],s)+(1-pM)*dnorm(x,paramF[1],s)
> lines(x,f5(x),lwd=3,col="blue")

Now, if we think a little about what we have just done, it is simply a linear regression on a factor, the sex of the person,

y_i=\beta_0+\beta_1\mathbf{1}(\text{sex}_i=M)+\varepsilon_i

where \varepsilon_i\sim\mathcal{N}(0,\sigma^2). And indeed, if we run the code to estimate this linear model,

> summary(lm(height~sex,data=Davis))

Call:
lm(formula = height ~ sex, data = Davis)

Residuals:
     Min       1Q   Median       3Q      Max 
-16.7143  -3.7143  -0.0114   4.2857  18.9886 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 164.7143     0.5684  289.80   <2e-16 ***
sexM         13.2971     0.8569   15.52   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.015 on 198 degrees of freedom
Multiple R-squared:  0.5488,	Adjusted R-squared:  0.5465 
F-statistic: 240.8 on 1 and 198 DF,  p-value: < 2.2e-16

we get the same estimators for the means and the variance as the ones obtained previously. So, as mentioned this morning in class, if there is a non-observable heterogeneity factor, we can use a mixture model to fit a distribution, but if we can find an observable proxy of that factor, we can simply run a regression. Most of the time, however, that observable variable is just a proxy of a non-observable one…
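As a footnote, a quick check (my addition) that the regression output does match the earlier group-level estimates: the intercept should be the average female height, and the slope the difference between the male and female means,

> coef(lm(height~sex,data=Davis))  # 164.7143 and 13.2971, as in the summary above
> paramF[1]                        # the intercept is the mean height of females
> paramM[1]-paramF[1]              # the slope is the difference of the two means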

Properties of the estimators in a regression

Since the refresher session (I should rather say the "remedial" session) was particularly quick, I will take some time to come back to a few elements of the econometrics course (especially since I have received a few questions by email).

  • on computing the estimators

Let me go through the questions I received, it will be simpler: "I do not quite understand which method the lm(Y~X…..) command uses in R to find the coefficients", or also "I regress a simple model, I print the summary, and I get a betaREG with 3 stars. Still in R, I compare this betaREG with betaCalc = (X'X)-1 * X'Y, and they differ! How can this be explained? Is it possible?".
Let us start with the first question: the lm() function uses the so-called least squares method, which amounts to computing

\widehat{\boldsymbol{\beta}}=\underset{\boldsymbol{\beta}}{\text{argmin}}\left\lbrace\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|^2\right\rbrace

or, with matrix notations, the solution is then

\widehat{\boldsymbol{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

This estimator is the one that minimizes the sum of squared errors. And we can check numerically that this estimator is indeed the one computed by R,

> data(cars)
> X=cbind(rep(1,nrow(cars)),cars$speed); Y=cars$dist
> X[1:5,]
      [,1] [,2]
 [1,]    1    4
 [2,]    1    4
 [3,]    1    7
 [4,]    1    7
 [5,]    1    8
> solve(t(X)%*%X)
            [,1]         [,2]
[1,]  0.19310949 -0.011240876
[2,] -0.01124088  0.000729927
> t(X)%*%Y
      [,1]
[1,]  2149
[2,] 38482
> (solve(t(X)%*%X))%*%(t(X)%*%Y)
           [,1]
[1,] -17.579095
[2,]   3.932409

which corresponds precisely to what the R function computes,

> lm(dist~speed,cars)
Coefficients:
(Intercept)        speed 
    -17.579        3.932

And we can also check that it does minimize the sum of squared errors,

> b0=seq(-30,10,by=3)
> b1=seq(1,7,by=.5)
> SC=matrix(NA,length(b0),length(b1))
> for(i in 1:length(b0)){
+ for(j in 1:length(b1)){
+ SC[i,j]=sum((cars$dist-(b0[i]+b1[j]*cars$speed))^2)
+ }}
> contour(b0,b1,SC)

and I can certify that the value obtained is indeed the minimum of this function (in fact, it is a theoretical optimization result).
This generalizes, of course, to higher dimensions. But be careful: this formula gives an estimator of the vector of parameters, and the formula is of course wrong when applied component by component.

> X=cars$speed; Y=cars$dist
> (solve(t(X)%*%X))%*%(t(X)%*%Y)
         [,1]
[1,] 2.909132

But this is a (trivial) result of linear algebra. Indeed, having

\widehat{\boldsymbol{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

does not mean that this kind of formula holds component by component, i.e.

\widehat{\beta}_k\neq(\mathbf{x}_k^T\mathbf{x}_k)^{-1}\mathbf{x}_k^T\mathbf{y}

Put somewhat formally, the matrix product is not a term-by-term product. The econometric translation of this idea is that a multiple regression is not a succession of simple regressions. Indeed, in the previous expression, the last term corresponds to the regression without the constant, as we can see below

> lm(dist~0+speed,cars)
Coefficients:
speed  
2.909

and the first one is simply the empirical mean,

> lm(dist~1,cars)
Coefficients:
(Intercept)  
      42.98  
> mean(cars$dist)
[1] 42.98
> X=rep(1,nrow(cars)); Y=cars$dist
> (solve(t(X)%*%X))%*%(t(X)%*%Y)
      [,1]
[1,] 42.98
  • on the properties of the estimators

One question was "should we check the properties of the estimator (unbiasedness, efficiency, etc.) from the summary of the regression? for every regression?"
Well, the answer is no, because it is impossible! To repeat calmly what I have said orally several times: when we build a (theoretical) model, we start from assumptions (for instance, here, the residuals are centered, with constant variance). Under these assumptions, we have properties: these are theorems, in other words, theory. The best known is the Gauss-Markov theorem on the linear model, which guarantees that the least squares estimator is BLUE (best linear unbiased estimator).
In other words (to rephrase what this theorem says), if the assumptions are valid, then theory guarantees that the estimators satisfy some properties, among which unbiasedness, for example. We cannot check that an estimator is unbiased; we can only check ex post whether the assumptions are valid (or not), which guarantees (or not) the absence of bias.
But as we will see below, bias is not necessarily a problem; what really matters is consistency. As Clive Granger noted, "if you can't even get a consistent estimator, you shouldn't be in this business".

  • and on more theoretical points

As for the person who had numerical difficulties computing the coefficients, the question was whether this had any "connection with plim X'E/n and plim (X'X)-1/n".
To come back to this point, a strong assumption in the (basic) linear model is that the noise is a true noise, i.e. uncorrelated with the explanatory variables,

\underset{n\rightarrow\infty}{\text{plim}}\left(\frac{1}{n}\mathbf{X}^T\boldsymbol{\varepsilon}\right)=\mathbf{0}

Indeed, otherwise, since \widehat{\boldsymbol{\beta}}=\boldsymbol{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\boldsymbol{\varepsilon},

\mathbb{E}\big[\widehat{\boldsymbol{\beta}}\big]=\boldsymbol{\beta}+\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\boldsymbol{\varepsilon}\big]\neq\boldsymbol{\beta}

in other words, the least squares estimator is biased. The most natural solution to avoid this "problem" is to use an instrument (or an instrumental variable).
But above all, this example shows that the least squares estimator will converge only if the correlation between the noise and the explanatory variable vanishes asymptotically.
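To make this visible, here is a small simulation sketch (my addition, with arbitrary numbers): the regressor is built to be correlated with the noise, and the least squares slope then converges to the wrong value,

> set.seed(1)
> n=100000
> x=rnorm(n)
> e=rnorm(n)+x          # the noise is correlated with the regressor
> y=2+3*x+e
> # the slope is close to 4, not 3: the asymptotic bias is cov(x,e)/var(x)=1
> lm(y~x)$coefficients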
There is indeed a fundamental difference between bias and consistency (i.e. convergence towards the true value).

  • an unbiased estimator is generally convergent (and consistent)
  • a biased estimator may fail to converge

Let us take a small example to illustrate these points. We will assume n1 < n2 < n3 in what follows. As I explained before, the distribution of an estimator is a theoretical property that cannot be seen on a single dataset (unless we run simulations, as in the sketch after the figures below, but that goes beyond the scope of today's post). The figure below corresponds to the unbiased and convergent case

[Figure: sampling distribution of an unbiased, convergent estimator, concentrating around the true value as the sample size grows from n1 to n3]

The figure below corresponds to the biased but convergent, and consistent, case, in the sense that, asymptotically, the estimator is unbiased.

[Figure: sampling distribution of a biased but consistent estimator: the bias vanishes as the sample size grows]

Finally, a last case corresponds to a biased, convergent, but non-consistent estimator: asymptotically, the estimator remains biased,

[Figure: sampling distribution of a biased, non-consistent estimator: the bias persists as the sample size grows]
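For the biased-but-consistent case, a minimal simulation sketch (again, my addition): the maximum likelihood estimator of a variance divides by n rather than n-1, so its expectation is (n-1)/n times the true variance, and the bias vanishes as n grows,

> set.seed(1)
> estim=function(n){ z=rnorm(n); mean((z-mean(z))^2) }  # true variance is 1
> # average of the estimator over 5000 samples: about 0.8, 0.98 and 0.998
> for(n in c(5,50,500)) print(mean(replicate(5000,estim(n))))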

But let us come back to the formalization underlying the linear model. The properties of the residuals are conditional on the explanatory variables; in particular,

\mathbb{E}[\varepsilon_i|\mathbf{X}]=0

\text{Var}(\varepsilon_i|\mathbf{X})=\sigma^2\text{ and }\text{cov}(\varepsilon_i,\varepsilon_j|\mathbf{X})=0\text{ for }i\neq j

(these are the homoscedasticity and independence assumptions on the residuals). We also need the absence of any linear relationship between the explanatory variables, which is sometimes stated as an absence of collinearity.
We can then look at the asymptotic properties of the estimator. To do so, we should perhaps recall what convergence means for a random variable (in mathematical statistics, estimators are seen as random variables). We say that \widehat{\theta}_n converges in probability towards \theta, sometimes denoted

\widehat{\theta}_n\overset{\mathbb{P}}{\longrightarrow}\theta

if

\lim_{n\rightarrow\infty}\mathbb{P}\big(|\widehat{\theta}_n-\theta|>\epsilon\big)=0\text{ for all }\epsilon>0

For instance, the law of large numbers guarantees that, for an i.i.d. sample

X_1,\cdots,X_n

if the expectation of the variables is finite, then

\overline{X}_n=\frac{1}{n}\sum_{i=1}^n X_i\overset{\mathbb{P}}{\longrightarrow}\mathbb{E}[X]
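To see the law of large numbers at work, a one-line illustration (my addition), with exponential variables whose expectation equals 1,

> set.seed(1)
> z=rexp(10000)               # i.i.d. sample with E[X]=1
> n=c(10,100,1000,10000)
> cumsum(z)[n]/n              # the running means get closer and closer to 1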

Then, under the assumptions mentioned above, we can show that, in the linear model,

\widehat{\boldsymbol{\beta}}\overset{\mathbb{P}}{\longrightarrow}\boldsymbol{\beta}

We then say that the estimator is convergent. We can also show that

\sqrt{n}\big(\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}\big)\overset{\mathcal{L}}{\longrightarrow}\mathcal{N}\big(\mathbf{0},\sigma^2 Q^{-1}\big)\text{ where }Q=\underset{n\rightarrow\infty}{\text{plim}}\left(\frac{1}{n}\mathbf{X}^T\mathbf{X}\right)

But, formally, convergence in law,

X_n\overset{\mathcal{L}}{\longrightarrow}X

holds if

\lim_{n\rightarrow\infty}F_{X_n}(x)=F_X(x)\text{ at every point }x\text{ where }F_X\text{ is continuous}
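This asymptotic normality can also be visualized by simulation (a sketch of my own, with an arbitrary data generating process): here \sigma^2=1 and \text{Var}(x)=1, so the rescaled slope should be close to a standard Gaussian,

> set.seed(1)
> slope=function(n){ x=rnorm(n); y=1+2*x+rnorm(n); lm(y~x)$coefficients[2] }
> b=replicate(2000,slope(200))
> hist(sqrt(200)*(b-2),proba=TRUE,col="light green",border="white",main="",xlab="")
> curve(dnorm(x),add=TRUE,lwd=2)   # the theoretical limit, N(0,1)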

That is, roughly, a few elements to answer the questions, but all of this will be covered in detail in the Econometrics 1 course, which starts very soon… and especially in Econometrics 2, which will address the dynamic aspects.