Category Archives: Publications

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace. In some cases, instead of global optimization, it is sufficient to consider optimization coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h>0 and i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where \mathbf{e}=(\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2 }+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (the so-called “support vector” methods) when the space is not linearly separable and the classification error must be penalized (we will come back to this technique in the next section).
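
To fix ideas, here is a minimal sketch of coordinate descent applied to the Lasso objective mentioned above, using the soft-thresholding update on each coordinate (the standardization of the columns, the number of sweeps and the toy data are illustrative assumptions, not part of the original discussion):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator, the coordinate-wise update for the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam ||b||_1.
    Assumes standardized columns and no intercept, for simplicity."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n               # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]  # partial residual for coordinate j
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / col_ss[j]
    return beta

# toy usage on simulated data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=100)
print(lasso_coordinate_descent(X, y, lam=0.1))
```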

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in separating the sample (of size n) into two parts: a part that will be used to train the model (the training database, in-sample, of size m) and a part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are realizations of independent and centred random variables. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk is worth \sigma^2 \text{trace} (\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is here \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from that obtained in-sample, and using the bound of Groves & Rothenberg (1969), we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{x}\sim\mathcal{N}(\mathbf{0},\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\text{ and }\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2, and as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
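
To make those two risks concrete, here is a small simulation sketch under the Gaussian linear model above (the sample sizes, the seed and the use of numpy’s least-squares routine are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, sigma = 2000, 1000, 10, 1.0
beta0 = rng.normal(size=p)

X = rng.normal(size=(n, p))                     # Gaussian design
y = X @ beta0 + sigma * rng.normal(size=n)

# fit on the first m observations (in-sample), keep the rest out-of-sample
Xtr, ytr, Xte = X[:m], y[:m], X[m:]
beta_hat = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]

risk_is = np.mean((Xtr @ (beta_hat - beta0)) ** 2)
risk_os = np.mean((Xte @ (beta_hat - beta0)) ** 2)
print(risk_is, sigma**2 * p / m)                # in-sample: close to sigma^2 p/m
print(risk_os, sigma**2 * p / (m - p - 1))      # out-of-sample: close to sigma^2 p/(m-p-1)
```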

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} according to the complexity of the model (the degree in a polynomial regression, the number of knots in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} will decrease (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (it does not even fit the in-sample data well). But if the model is too complex, we are in a situation of “overfitting”: the model starts to fit the noise. Of course, this figure should remind us of the one we have seen in our second post of that series.

Figure 4: Generalization, under- and over-fitting

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). Cross-validation is based on the same idea: build an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model and the observation that was left out: \widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We will speak here of the “leave-one-out” (loocv) method.
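
As an illustration, here is a minimal sketch of the leave-one-out computation for a plain linear regression (the quadratic loss and the toy data are assumptions made for the example):

```python
import numpy as np

def loocv_risk(X, y, loss=lambda a, b: (a - b) ** 2):
    """Leave-one-out cross-validated risk of a linear regression:
    refit the model n times, each time leaving one observation out."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        errors[i] = loss(y[i], X[i] @ beta_i)
    return errors.mean()

# toy usage
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)
print(loocv_risk(X, y))
```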

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace, as described by Hyndman et al. (2009).
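
A sketch of that smoothing-parameter search, on simulated data and with the recursion written above (the series, the quadratic loss and the optimizer are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def one_step_errors(alpha, y):
    """One-step-ahead forecast errors of simple exponential smoothing,
    with the recursion f_{t+1} = alpha * f_t + (1 - alpha) * y_t."""
    f, errors = y[0], []
    for t in range(1, len(y)):
        errors.append(y[t] - f)           # forecast of y_t made at time t-1
        f = alpha * f + (1 - alpha) * y[t]
    return np.array(errors)

def sse(alpha, y):
    return np.sum(one_step_errors(alpha, y) ** 2)

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=200)) + rng.normal(size=200)
res = minimize_scalar(sse, bounds=(0, 1), args=(y,), method="bounded")
print("optimal alpha:", res.x)
```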

The main problem with the leave-one-out method is that it requires the calibration of n models, which can be problematic in large dimensions. An alternative method is cross-validation by k blocks (called “k-fold cross-validation”), which consists in partitioning \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Writing \widehat{m}_{(j)} for the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimations to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
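
A sketch of the k-fold computation, again for a plain linear regression (the random partition and the quadratic loss are assumptions made for the illustration):

```python
import numpy as np

def kfold_risk(X, y, k=10, loss=lambda a, b: (a - b) ** 2, seed=0):
    """k-fold cross-validated risk of a linear regression (k=n gives loocv)."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    blocks = np.array_split(idx, k)                    # the k blocks I_1,...,I_k
    risks = []
    for block in blocks:
        train = np.setdiff1d(idx, block)               # the complement of block j
        beta_j = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
        risks.append(np.mean(loss(y[block], X[block] @ beta_j)))
    return np.mean(risks)

# toy usage
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(size=200)
print(kfold_risk(X, y, k=5), kfold_risk(X, y, k=10))
```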

Another alternative is to use bootstrapped samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, indicating which observations (y_i,\mathbf{x}_i) are kept in the learning population (at each draw). Note \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Writing \widehat{m}_{(b)} for the model built on sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that, with this technique, on average e^{-1}\approx36.8\% of the observations do not appear in the bootstrapped sample, and we recover the order of magnitude of the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) showed, the minimization of AIC is to be compared with the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.
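
Here is a minimal sketch of that bootstrap (out-of-bag) evaluation for a linear regression; the averaging convention inside each bootstrap sample and the toy data are assumptions made for the illustration:

```python
import numpy as np

def bootstrap_oob_risk(X, y, B=200, loss=lambda a, b: (a - b) ** 2, seed=0):
    """Fit on a bootstrap sample I_b (drawn with replacement), evaluate on the
    observations left out of I_b (on average about 36.8% of them)."""
    rng = np.random.default_rng(seed)
    n, risks = len(y), []
    for _ in range(B):
        in_bag = rng.integers(0, n, size=n)            # draw with replacement
        oob = np.setdiff1d(np.arange(n), in_bag)       # the left-out observations
        if oob.size == 0:
            continue
        beta_b = np.linalg.lstsq(X[in_bag], y[in_bag], rcond=None)[0]
        risks.append(np.mean(loss(y[oob], X[oob] @ beta_b)))
    return np.mean(risks)

# toy usage
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(size=200)
print(bootstrap_oob_risk(X, y))
```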

All those techniques are mentioned here, in the “machine learning” part, since they rely on automatic, computational techniques, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with the estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, it is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

In parallel with these tools developed by, and for, economists, a whole literature has developed on similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics developed around the principle of inference (i.e. explaining the relationship linking y to the variables \mathbf{x}), while another culture is primarily interested in prediction. In a discussion that follows the article, David Cox states very clearly that in statistics (and econometrics) “predictive success (…) is not the primary basis for model choice”. We will get back here to the roots of machine learning techniques. The important point, as we will see, is that the main concern of machine learning is related to the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on out-of-sample data.

A learning machine

Today, we speak of “machine learning” to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, it should be noted that, historically, other names have been used. For example, Friedman (1997) proposes to make the link between statistics (which closely resembles econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) and what was then called “data mining” (which then included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the “statistical learning” techniques described in Hastie et al. (2009). But one should keep in mind that machine learning is a very large field of research.

The so-called “natural” learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also learns simultaneously the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization, discovery, more or less supervised or autonomous learning, etc. The idea in artificial intelligence is to take inspiration from the functioning of the brain to learn, to allow “artificial” or “automatic” learning, by a machine. A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve to win. One historical approach has been to teach the machine the rules of the game. While this allows it to play, it will not help the machine to play well. Assuming that the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm, based on an evaluation function: in this algorithm, the machine searches forward in the tree of possible moves, as far as the computational resources allow (about ten moves ahead in chess, for example). Then, it computes various criteria (which have been indicated to it beforehand) for all positions (number of pieces taken, or lost, occupation of the center, etc., in our chess example), and finally, the machine plays the move that allows it to maximize its gain. Another example may be the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (cheques, ZIP codes on envelopes, etc.). It is a question of predicting the value of a variable y, knowing that a priori y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training databases, in other words here millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is to use a decision criterion based on the nearest neighbours whose labels are known (using a predefined metric).

The nearest-neighbour method (“k-nearest neighbours”) can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, let us assume that the observations are ordered according to the distance between the \mathbf{x}_i and \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}); then we can consider as a prediction for y the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
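
A minimal sketch of that k-nearest-neighbour prediction, with the Euclidean distance (the toy regression data are an assumption made for the example):

```python
import numpy as np

def knn_predict(x_new, X, y, k=5):
    """k-nearest-neighbour prediction: average the responses of the k
    observations closest to x_new (Euclidean distance here)."""
    dist = np.linalg.norm(X - x_new, axis=1)
    nearest = np.argsort(dist)[:k]
    return y[nearest].mean()

# toy usage
rng = np.random.default_rng(6)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] + 0.1 * rng.normal(size=200)
print(knn_predict(np.array([0.5, 0.5]), X, y, k=10))
```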

Machine learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine will then explore the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E with respect to a task T and a performance measure P, if its performance on T, measured by P, improves with experience E. Task T can be a default score, for example, and performance P the percentage of errors made. The system learns if the proportion of correctly predicted defaults increases with experience.

As we can see, machine learning is basically a problem of optimizing a criterion based on data (henceforth called training data). Many textbooks on machine learning techniques propose algorithms without ever mentioning any probabilistic model. In Watt et al. (2016) for example, the word “probability” is mentioned only once, with this footnote that will surprise and amuse any econometrician: “the logistic regression can also be interpreted from a probabilistic perspective” (page 86). But many recent books offer a review of machine learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the paradigm of “probably approximately correct” (PAC) learning, a probabilistic flavour has been added to the previously very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).

To be continued (references are online here)…

AI to predict riots?

A few weeks ago, I was contacted by a journalist who wanted to ask me a few questions, following our article Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media. It was an opportunity to dive back into it… and to see what has been written since… And tonight, I discovered, somewhat by chance, that the article has been published in the February issue of Science & Vie…

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential distribution (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}, based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression, the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we will define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
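
As an illustration of that iterative algorithm, here is a sketch of iteratively reweighted least squares for a Poisson regression with log link (the choice of the Poisson case, the starting value and the toy data are assumptions made for the example, not the general formulation above):

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    """Iteratively reweighted least squares for a Poisson regression with
    log link: working response z and working weights updated at each step."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                    # linear predictor
        mu = np.exp(eta)                  # fitted values, g^{-1}(eta)
        z = eta + (y - mu) / mu           # working response (first-order Taylor of g)
        w = mu                            # working weights for this link and variance
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# toy usage
rng = np.random.default_rng(7)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))
print(irls_poisson(X, y))   # should be close to (0.5, 0.8)
```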

From a numerical point of view, the computer will solve the first-order condition, and actually, the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of a logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta, or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta), where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i}. It is said to be a linear model because the iso-probability curves are here the parallel hyperplanes defined by \beta_0+\mathbf{x}^T\beta. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}_i^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals with unit variance, and to consider a latent model of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
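
A sketch of the likelihood maximization for the logit model above, using a generic numerical optimizer (the intercept stored as a first column of ones and the simulated data are assumptions made for the illustration):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, y):
    """Negative log-likelihood of the logistic model P(Y=1|x) = H(x'beta)."""
    eta = X @ beta
    return -(y @ eta - np.logaddexp(0.0, eta).sum())   # -sum[ y*eta - log(1+e^eta) ]

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

res = minimize(neg_log_lik, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x)   # should be close to (-0.5, 1.0, 2.0)
```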

Regression in high dimension

As we mentioned earlier, the first order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s from \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to do non-uniform sub-sampling, with a probability related to the influence of the observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be calculated once the model has been estimated).
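
A sketch of such a non-uniform sub-sampling, here with probabilities proportional to the leverages L_i (a simplification used as a proxy, since the influence measure I_i mentioned above can only be computed after a first fit):

```python
import numpy as np

def leverage_subsample_ols(X, y, n_s, seed=0):
    """OLS on a sub-sample of size n_s drawn with probabilities proportional
    to the leverage L_i = x_i (X'X)^{-1} x_i' of each observation."""
    rng = np.random.default_rng(seed)
    lev = np.sum((X @ np.linalg.inv(X.T @ X)) * X, axis=1)   # leverages
    idx = rng.choice(len(y), size=n_s, replace=False, p=lev / lev.sum())
    return np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

# toy usage
rng = np.random.default_rng(9)
X = np.column_stack([np.ones(10000), rng.normal(size=(10000, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=10000)
print(leverage_subsample_ols(X, y, n_s=500))
```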

In general, we will talk about massive data when the data table, of size n\times p, does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many machine learning libraries use iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). The latter avoids having to compute, at each iteration, the gradient on every observation of the training sample. Rather than making an average descent at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is calculated (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta (\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Shuffle the data
  • Iteration step: for t=1,\cdots, n, draw i\in\{1,\cdots,n\} without replacement, and set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(\mathbf{x}_i)) } }{ \partial{ \beta}}

This algorithm can be repeated several times as a whole, depending on the user’s needs. The advantage of this method is that, at each iteration, it is not necessary to calculate the gradient on all the observations (there is no longer a sum over the n observations). It is therefore suitable for large databases. This algorithm is based on convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
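
The iteration above can be sketched as follows, for the quadratic loss (the constant step size, in place of the decreasing sequence \gamma_t, and the simulated data are simplifying assumptions):

```python
import numpy as np

def sgd_linear(X, y, gamma=0.01, n_passes=5, seed=0):
    """Stochastic gradient descent for the quadratic loss (y - x'beta)^2,
    one observation at a time; the whole pass can be repeated several times."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_passes):
        for i in rng.permutation(n):                   # step 0: shuffle the data
            grad = -2 * (y[i] - X[i] @ beta) * X[i]    # gradient on one observation
            beta = beta - gamma * grad
    return beta

# toy usage
rng = np.random.default_rng(10)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)
print(sgd_linear(X, y))
```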

(references will be given in the very last post of that series) To be continued

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let’s denote by \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Let \mathcal{E}_X denote the space spanned by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered. Note that no distributional assumption is made in this section: the geometric properties are derived from the properties of expectation and variance in the space of variables with finite variance.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. Thus, the forecast we can make for \mathbf{y} is \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. Since \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathcal{E}_X, which will be interpreted as meaning that the residuals are an innovation term, unpredictable in the sense that \Pi_{X }\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2, which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}}. The coefficient of determination, R^2, is then interpreted as the square of the cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta). An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes \mathbf{y}=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta} _2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables between the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, on each of the sets of explanatory variables. This is a theorem of double projection, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between conditional expectation and orthogonal projection in the space of variables with finite variance).
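
A small numerical sketch of the Frisch-Waugh double projection (the simulated design and the use of numpy routines are assumptions made for the illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # first group (with constant)
X2 = rng.normal(size=(n, 2))                                  # second group
y = X1 @ np.array([1.0, 2.0, -1.0]) + X2 @ np.array([0.5, -0.5]) + rng.normal(size=n)

# full regression on [X1 | X2]
beta_full = np.linalg.lstsq(np.hstack([X1, X2]), y, rcond=None)[0]

# double projection: project y and X2 on the orthogonal of span(X1), then regress
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
beta2_fw = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]

print(beta_full[-2:])   # coefficients of X2 in the full regression
print(beta2_fw)         # the same, by the Frisch-Waugh theorem
```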

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf{\beta}_1 + \underbrace{(\mathbf{X}_1^T\mathbf{X}_1)^{-1} \mathbf{X}_1^T \mathbf{X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}, so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias (\beta_{12}) being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b}_1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us to see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns of the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or the complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is harder to obtain explicitly.
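
For the linear regression, the trace of the smoothing matrix can be checked directly (a minimal sketch, with a simulated design):

```python
import numpy as np

rng = np.random.default_rng(12)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

S = X @ np.linalg.inv(X.T @ X) @ X.T   # smoothing matrix of the linear predictor
print(np.trace(S))                     # = p, the number of explanatory variables
print(n - np.trace(S))                 # nu, the number of degrees of freedom
```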

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)}, where K(\cdot) is a kernel function, which assigns a weight that is smaller the further x_i is from x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic expansions, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}, for some constants that can be estimated (see Simonoff (1996) for a discussion). These two functions evolve inversely with h, as shown in Figure 1 (where the meta-parameter on the x-axis is here, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.
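
A minimal sketch of that estimator with a Gaussian kernel, to see the role of the bandwidth h (the simulated regression function and the three bandwidth values are illustrative assumptions):

```python
import numpy as np

def nadaraya_watson(x, x_obs, y_obs, h):
    """Nadaraya-Watson estimator at point x, Gaussian kernel, bandwidth h."""
    k = np.exp(-0.5 * ((x - x_obs) / h) ** 2)   # kernel weights K_h(x - x_i)
    s = k / k.sum()                             # smoothing vector s_x
    return s @ y_obs

rng = np.random.default_rng(13)
x_obs = np.sort(rng.uniform(0, 1, 200))
y_obs = np.sin(2 * np.pi * x_obs) + 0.3 * rng.normal(size=200)
for h in (0.01, 0.05, 0.2):       # small h: more variance; large h: more bias
    print(h, nadaraya_watson(0.5, x_obs, y_obs, h))
```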

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).

The natural idea is then to try to minimize the mean squared error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then integrate over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})} while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}.
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x), and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}}. The main problem here, in practice, is that many of the terms in the expression above are unknown. Machine learning offers computational techniques, where the econometrician is used to looking for asymptotic (mathematical) properties.

To be continued (references mentioned above are online here)…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter, “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) showed (initiating a profound change in econometric theory in the 1930s, as recalled in Chapter 8 of Morgan (1990)), econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that the observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint distribution of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well; in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can still be found in all statistical textbooks today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that, under fairly general conditions, the sum of a large number of random variables is approximately normally distributed.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming that the variables (Y_i, \mathbf{X}_i) are independent (it is possible to imagine temporal observations – we would then have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~ (1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written: \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2. Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. Then we will set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace. The maximum likelihood estimator is obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.

The first order conditions allow us to find the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator: \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2), using the residual-based formulation (as often in econometrics), y=\mathbf{x}^T\mathbf{\beta}+\varepsilon. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Hence, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with smallest variance[3] among unbiased linear estimators. Furthermore, even if we cannot get normality at finite distance, this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
The condition of having a full rank matrix \mathbf{X} can be (numerically) restrictive in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, whatever \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
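
A short numerical sketch of the two estimators, least squares and ridge (the simulated data and the value of \lambda are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(14)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

# classical least-squares / maximum likelihood estimator, equation (2)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# ridge estimator of level lambda: (X'X + lambda I)^{-1} X'y
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(beta_ols)
print(beta_ridge)   # shrunk towards zero, compared with beta_ols
```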

Residuals

It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Thus, equation (1) is often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3), where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables, from some \mathcal{N}(0,\sigma^2) distribution. With a vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}]. Those (estimated) residuals are a basic tool for diagnosing the relevance of the model.

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})), where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i, where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in count models or logistic regression.

However, writing the model with an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear to begin with) between the quantities of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term) where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed, but this interpretation often makes the link between an economic relationship and an econometric model complicated: economic theory speaks abstractly about a relationship between two magnitudes, while the econometric model imposes a specific form (which magnitude is y and which magnitude is x), as shown in more detail in Chapter 7 of Morgan (1990).

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.

Gini Regressions and Heteroskedasticity

Our joint paper “Gini Regressions and Heteroskedasticity”, with Ndéné Ka, Stéphane Mussard and Oumar Hamady Ndiaye, just appeared in Econometrics.

We propose an Aitken estimator for the Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between the generalized least squares and the Gini regression. A Gini-White test is proposed and shows that a better power is obtained compared with the usual White test when outlying observations contaminate the data.

Networks to reinvent insurance?

The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who was trying to find a walk – starting from a given point – that would return to that point after crossing each of the seven bridges of the city of Königsberg once and only once. These networks can be compared to subway networks, made up of stations (the nodes), connected or not by rails, or more generally to a road network, which can give rise to congestion studies, for example. But today networks are above all social, connecting people through friendship, professional, family or monetary ties. Network analysis makes it possible to create relatively homogeneous communities that agree to share a risk, recreating a form of mutualization.

Networks and credit

In genealogy, we have hierarchical networks, a child being linked to his or her parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyze the links between individuals (or organizations) within a group. We can study friendships in a schoolyard (a link can be an invitation to a birthday party) or exchanges of e-mails within a company (the Enron e-mail database has been used extensively, with more than 180,000 messages exchanged between 36,000 employees[i]). Figure 1 shows two networks of 20 individuals (A, B, …, T).

Figure 1: random networks, 20 nodes (Watts-Strogatz and Barabási)

In a Facebook or LinkedIn type of vision, we will say that E and F are linked, in the “friends” sense, if there is a segment connecting points E and F. A network can be directed, for example if we study exchanges of messages (E wrote to F), or loans of money (E lent money to F). While historically only adjacency was studied (the existence or not of links), we can nowadays add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes then being the banks). The study of networks within village communities in developing countries has made it possible to better understand the mechanisms of informal finance. Banerjee et al. (2013) thus study the diffusion of information in a network, and more specifically microfinance loans.

While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit institutions to use a borrower’s social network to determine whether or not he or she represents a good credit risk. In particular, if the credit score of a person’s friends was too low, that person could be denied credit. This situation is dangerous because of particular properties of networks, and more specifically the friendship paradox.

From the small world to the friendship paradox

In 1929, Frigyes Karinthy hypothesized that anyone on earth could be linked to anyone else by a chain of individual relationships comprising at most 6 links. “We should select any person from the 1.5 billion inhabitants of the planet, anyone, anywhere. It appears that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the selected individual using nothing but the network of personal acquaintances.” This theory of the six handshakes originated in a short story. It was not until the work of Michael Gurevich in the 1960s, and then of Stanley Milgram ten years later, that the first attempts to quantify these relationships appeared, under the name of the “Small World Problem”. While Leskovec & Horvitz (2008) confirmed this order of magnitude, through the analysis of several billion messages exchanged on the Windows Live Messenger platform, more recently Bhagat et al. (2016) estimated that any two people on Facebook are connected by an average of three and a half other people. On the random network on the left, a person has, on average, 2 friends, while a randomly chosen friend has, on average, 2.25 friends. On the network on the right, the gap is even larger: there too a person has, on average, 2 friends, but a randomly chosen friend has, on average, more than 4 friends.

Figure 2: random networks, 500 nodes (Watts-Strogatz and Barabási)

This paradox, observed in 1991 by the sociologist Scott Feld, is very easy to prove. Heuristically, it can be linked to the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the term on the left is the number of friends of my friends, divided by my number of friends. The difference is all the larger as the dispersion of the number of friends is large. While the network on the left is very dense, the one on the right has a power-law property: the distribution of the number of friends follows a power law (or Zipf, or Pareto, distribution). Figure 3 shows the distribution of the number of friends on a network, on a log-log scale: linearity indicates a power-law distribution. This kind of distribution can be found in a very large number of networks, in particular Facebook, as shown by Wohlgemuth & Matache (2014).
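
To illustrate, here is a minimal simulation sketch of the friendship paradox on the two random graph models used in the figures (the networkx generators and the graph sizes are assumptions made for the example):

```python
import numpy as np
import networkx as nx

# two random graphs: Watts-Strogatz (homogeneous degrees) and Barabasi-Albert (power law)
for G in (nx.watts_strogatz_graph(500, 4, 0.1, seed=15),
          nx.barabasi_albert_graph(500, 2, seed=15)):
    deg = np.array([d for _, d in G.degree()])
    # average number of friends, versus average number of friends of a friend:
    # E[X^2]/E[X] = E[X] + Var[X]/E[X] >= E[X], the gap growing with the dispersion
    print(deg.mean(), (deg ** 2).mean() / deg.mean())
```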

Figure 3: distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red)

The classical interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (one then speaks of a “peer effect”), but it also has impacts in risk management, or in public health. Christakis & Fowler (2010) have thus shown that flu epidemics can be detected almost two weeks in advance, by monitoring the infection in a social network. In particular, analyzing the health of the central people in a network is “an ideal way to predict outbreaks, but detailed information doesn’t exist for most groups, and to produce it would be time-consuming and costly”. To return to the credit score example, if the latter happens to be correlated with the number of friends, the friendship paradox makes it dangerous to use friends’ scores as an indicator of an individual’s risk!

The importance of homophily

Another important feature of networks is the notion of homophily, introduced into sociology in 2001 by two important articles, and corresponding to the tendency to be connected to one's peers. McPherson et al. (2001) started from the principle that similarity breeds connection, and that, consequently, people's personal networks are homogeneous across many sociodemographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, and more particularly interracial friendships. Easley & Kleinberg (2010) present many consequences of homophily, from the composition of tables at business dinners to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (by gender, age, socio-professional category, etc.), how the links are distributed between groups, or within groups.
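As a minimal illustration (in Python, assuming networkx; the graph and the groups are simulated, nothing here comes from real data), one can compare the observed share of within-group links with the share expected if links ignored groups entirely:

import networkx as nx
from collections import Counter

def homophily_index(G, group):
    # observed share of edges joining two nodes of the same group,
    # versus the approximate share expected under random mixing
    same = sum(1 for u, v in G.edges() if group[u] == group[v])
    observed = same / G.number_of_edges()
    n = G.number_of_nodes()
    shares = [c / n for c in Counter(group.values()).values()]
    expected = sum(s ** 2 for s in shares)
    return observed, expected

# toy example: two groups of 50 nodes, with denser links within groups than between
G = nx.planted_partition_graph(2, 50, p_in=0.10, p_out=0.02, seed=1)
group = {i: (0 if i < 50 else 1) for i in G.nodes()}
obs, exp = homophily_index(G, group)
print(f"within-group share of links: {obs:.2f} (vs about {exp:.2f} under random mixing)")

A within-group share well above the random-mixing benchmark is the signature of strong homophily, as in the bottom network of Figure 4.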

Figure 4: weak homophily (top) and strong homophily (bottom).

In an insurance context, an actuary seeks to create rating classes, groups that are homogeneous in terms of risk, based on explanatory variables (the so-called rating variables). People who live in the same place, drive the same types of vehicles, and share the same characteristics are very likely to end up in the same class. But if homophily exists in a population, a rating group might perhaps be observable on a network of friends. Why not, then, consider building groups within a network?

Using networks in insurance

In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 policyholders in 2018[i]. In France, a short-lived collaborative insurance experiment was launched in 2015 with Inspeer, offering to pool property insurance deductibles (in motor or home insurance) among friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, rely on the formation of small groups by a broker. Part of the insurance premiums paid goes into a collective fund, the other part to a third-party insurance company. Minor claims suffered by a policyholder are first covered by this group fund. For claims exceeding the deductible, the usual insurer is called upon. A group can be formed by the policyholders themselves, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (for example, liability insurance together with legal expenses insurance).
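As a purely stylized sketch of this mechanism (all figures below – group size, premium split, deductible, claim distribution – are invented for illustration and do not describe Friendsurance's actual terms):

import numpy as np

rng = np.random.default_rng(0)

# invented parameters, for illustration only
n_members  = 10      # size of the group of friends
deductible = 500.    # per-claim amount covered by the group fund
fund_share = 0.3     # share of each premium paid into the group fund
premium    = 400.    # annual premium per member

# one simulated year: each member has a 10% chance of a claim, with lognormal severity
claims = (rng.random(n_members) < 0.10) * rng.lognormal(mean=6.5, sigma=1.0, size=n_members)

group_fund      = n_members * premium * fund_share
paid_by_fund    = np.minimum(claims, deductible).sum()      # small claims, up to the deductible
paid_by_insurer = np.maximum(claims - deductible, 0).sum()  # the excess goes to the insurer

print(f"group fund at the start of the year: {group_fund:.0f}")
print(f"paid by the group fund             : {paid_by_fund:.0f}")
print(f"paid by the third-party insurer    : {paid_by_insurer:.0f}")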

As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. People are indeed less inclined to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of the mutuality underlying the law of large numbers disappears. But are we not reinventing a version 2.0 of tontine associations, with the return of risk pooling within close-knit communities?

References

Joshua Angrist. The perils of peer effects. Labour Economics, 30, 98-108, 2014.

Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.

Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341,

Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.

Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. August 4, 2015, CNN.

Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5(9): e12948, 2010. arXiv:1004.4792.

David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press. 2010.

Scott Feld. Why your friends have more friends than you do, American Journal of Sociology, 96 (6): 1464–1477, 1991.

Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.

Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.

Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology. 27: 415–444, 2001.

James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107 (3): 679-716, 2001.

Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol 66 (4) : 470-478, 2005.

Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013,

Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.

Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.

[i] The complete data are online at https://snap.stanford.edu/data/email-Enron.html

[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively

Mapping cities

a French version of this article is online at http://variances.eu/

Issue 53 of Insee Analyses Ile-de-France provides an analysis of “a social mosaic specific to Paris“, with the map in Figure 1.

Figure 1 : INSEE, Insee Analyses 53, 2017

This map is a priori familiar to many people: we quickly recognize the city represented, we quickly find its different elements, and we read the information presented almost instinctively. In urban history, the way maps were seen and drawn has often been the basis of urban planning; changing the representation has made it possible to change the structure of cities. We will take up here the two major historical turning points mentioned in Söderström (1996), based on two recent works: the representation of Rome at the beginning of the Renaissance, and the first ichnographic plans, described in Maier (2015), and the “social” or “health” maps of London drawn by Victorian civil servants, described in Vaughan (2018). In particular, the latter are the ancestors of zoning maps, which are widely used in urban planning, but also correspond to the majority of maps produced by statisticians and economists (the INSEE map is an example). And some maps from the last century rival those produced today, in the era of big data.

Rome, Leon Battista Alberti and Leonardo Bufalini, and the immutable mobiles

Choay (1980) emphasizes the fundamental role in the history of urban planning of Alberti’s De Re Aedificatoria (presented in manuscript form to Pope Nicholas V in 1452, but published only in 1485). Alberti’s treatise is indeed the first text to consider construction (Alberti prefers the term “construction” – ædificatoria – to cover both architecture and urban planning) as an autonomous domain to which the rational method must be applied. The history of representation reached a turning point with the Renaissance, with figurative forms used to represent urban space. The medieval aesthetic was left behind with the rediscovery of perspective, which produced a rationalization of what can be seen, even if it often induces a partial vision of the object. In his treatise, Leon Battista Alberti proposes a scientific method governing the art of building the house, but also the entire city. But it is in Descriptio urbis Romae, probably written at the same time, that he deepened the idea of urban planning, taking the particular example of Rome.

In his book, Alberti does not propose any map of Rome, but a list of instructions to follow in order to create one, with coordinate tables for several important elements of the city, both natural and artificial. The list includes the ramparts, the river (the Tiber), the city gates, and more than thirty public buildings, including the Capitol, which for Alberti is the reference point of the urban plan. He proposed to represent the city using a disc divided into 48 sectors, placing any building by its distance to the Capitol (in addition to a compass bearing). All the calculations are detailed in Ludi Matematici Descriptio, using triangulation techniques. In 1450, Alberti thus invented the geometric plan, corresponding to what we would today call the plan of a city, even if the circular shape may be surprising at first sight (see Figure 2) and does not correspond to the ichnographic plan we all use today (obtained by horizontal and geometric projection onto a plane).
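As a toy illustration of this polar representation (a sketch only: the landmarks, sectors and distances below are invented, and Alberti’s actual tables in Descriptio urbis Romae are of course different):

import math

def alberti_to_cartesian(sector, distance, n_sectors=48):
    # convert (sector of the disc, distance to the Capitol) into Cartesian
    # coordinates centred on the Capitol
    angle = 2 * math.pi * sector / n_sectors
    return distance * math.cos(angle), distance * math.sin(angle)

# purely invented landmarks: (sector on the 48-part disc, distance in arbitrary units)
landmarks = {"Porta del Popolo": (4, 9.0), "Pantheon": (10, 3.5), "Castel Sant'Angelo": (14, 7.2)}
for name, (sector, dist) in landmarks.items():
    x, y = alberti_to_cartesian(sector, dist)
    print(f"{name:20s} -> ({x:5.2f}, {y:5.2f})")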

Figure 2 : reconstruction of Alberti’s map in Descriptio urbis Romae, by Luigi Vagnetti in Lo studio di Roma negli scritti albertiani (1974). Source: Maier (2015).

His plan corresponds to the emergence of a new, very geometric mode of representation. But it was not until Leonardo Bufalini’s plan of 1551 that the first ichnographic plan arrived (it would be unfair to forget the plan of Imola drawn in 1503 by Leonardo da Vinci). While Alberti’s plan indicated the coordinates of a building, Bufalini decided to incorporate the ground plan of the buildings into his city plan.

Figure 3: Bufalini’s map, Roma, 1551, British Library, London. Source: Maier (2015).

But if Alberti’s plan had such an impact, it is also because it came at the time when Pope Nicholas V launched a plan to rebuild Rome, covering an entire district, from Castel Sant’Angelo to the Vatican. This is probably the first urban planning project on this scale, proposing to use the urban form as an instrument of social engineering. Alberti’s representation helped this project, with a scientific vision of the map that no longer depended on the artist’s skill, nor on inscribing the map in a narrative that would give it meaning. This urban map is self-sufficient, containing the terms of its own meaning. In Latour’s (1989) terminology, these representations that can be detached from the place (or object) they represent, “while remaining immutable so that they can be moved in all directions without further distortion, loss or corruption”, correspond to immutable mobiles. Alberti’s map is one of the first examples of these immutable mobiles. It juxtaposes the natural and the human construction, the profane and the sacred, placing measurement and position as the only values.

These plans see the urban space as a whole, without offering a single point of view, unlike the more classic maps (for the time) of Jacopo Filippo Foresti, for example (see Figure 4). It is possible to place oneself at Foresti’s point of view to see his map; Alberti’s map exists only as an abstract object.

Figure 4 : view of Rome by Jacopo Filippo Foresti, 1490. Source: Maier (2015).

If Leonardo Bufalini’s map revolutionized urban mapping, and if the ichnographic plan is the dominant representation today, these maps long remained marginal, because they were exclusively reserved for administrative or military purposes. Foresti’s type of map has not completely disappeared: it can be found in tourist maps, for example, which are not very concerned about proportions and simply seek to stage monuments or to indicate itineraries. We thus contrast an often local, horizontal vision (on a human scale) with a vision sometimes called zenithal, which proposes to conceive objects in abstract terms. It is the latter that makes it possible to represent the city as a set of different neighbourhoods, with different levels of wealth for example, resulting in the geometric plans used for social statistics in Victorian times, and making the city an object of census, measurement and comparison.

Also noteworthy is the 1748 map of Rome created by Giambattista Nolli. Previously, Leonardo Bufalini proposed to take the point of view of an eagle, flying over the city. Nolli established the now common practice of representing entire cities from above without a single focal point, each block being considered as if the cartographer were directly above it.

Figure 5 : Giambattista Nolli’s map of Rome, 1748. Source: Sylvain Mottet.

London, Thomas More and Charles Booth, and the zoning maps

At the end of the 19th century (from 1870 onwards), Germany saw the first “social maps”, born in the context of an increasingly dense urban population, high social tensions and deteriorating health conditions. German planners proposed an innovative vision of the city as a living organism that needed to be made to function more efficiently. In 1876, Reinhard Baumeister in Stadterweiterungen in technischer, baupolizeilicher und wirtschaftlicher Beziehung and especially Josef Stübben in Der Städtebau, in 1890, proposed the first urban planning manuals. Towards the end of the first chapter, Baumeister proposes to use an urban expansion plan, a master plan to organize the future urban space. For him, it was a question of ensuring the stability and proper functioning of a city designed as a living organism, faced with the problems it encounters: overcrowding in certain districts, traffic and hygiene problems, social unrest, etc. To do this, he suggests specializing the city’s sectors in functional and social terms – what would later be called a “zoning plan” (or Bauzonenplan) – and ensuring the sustainability of this specialization. However, he warns against an overly rigid and inflexible master plan: urban development cannot be planned with too much precision, and it is therefore counterproductive to want to freeze it in a totally predetermined framework. His plan aims to provide the general guidelines necessary for the cohesion of the urban organization. In particular, he notes that the more guidelines there are, the more they will have to be set out in local plans with a limited time horizon.

While the zoning plan was not originally conceived as part of the master plan, it quickly became its key document, its clearest and most effective part. The objective was to grasp, at a glance, the whole city as part of an administrative project. It is not only a question of having an overall vision of the city (which the ichnographic plan already allowed) but also of using colour codes that facilitate the total regulation of this city. In particular, this zoning plan made it possible to predict, several years or even decades in advance, what the morphological and functional characteristics of a given area would be. It also allowed investors to anticipate the future of an area and to guarantee a certain return on their investments.

This vision proposed by Baumeister thus made it possible to see more clearly, for example, that the most bourgeois areas were often located in the west of cities. This position simply reflects the fact that these areas are often healthier: the smoke and smog produced by cities are dispersed in the upper layers of the atmosphere, and when the wind comes from the west (which happens most often in most European cities) the smoke and smog are transported eastwards and towards the lower layers of the atmosphere. From this observation, it becomes natural to build factories in the east and houses in the west. Baumeister’s work was not only theoretical: he worked on the development of the city of Frankfurt in 1891, then Berlin, Cologne, Essen, etc. In Frankfurt, he proposed the idea of concentric zones, which was later taken up by many economists. Figure 6 shows this form of city, in an article published in 1925 by Ernest Burgess (who would become one of the founders of the Chicago school). At the beginning of the First World War, all German cities had a zoning plan. In the following years, it was the United States that adopted the concept, with New York in 1916, and more than 500 cities by 1926. In that year, zoning was officially institutionalized, with the approval of the Supreme Court. In 1933, the Athens Charter recognized zoning as the main and central task of urban planning.

Figure 6: the concentric city, Burgess (1925). Source: Vaughan (2018)

But in parallel with these German developments, where civil servants were imagining the instruments of contemporary urban planning, social planning in England took place in a context of strong social tensions. The impoverishment of a large part of the population, the many very precarious housing units, the disastrous sanitary conditions and the increase in crime in large cities made urban development management an extremely sensitive and political subject. It is not surprising to see the work of Patrick Geddes published in Edinburgh: a biologist by training (the city is seen as a living organism) and an anarchist activist, he thought of images and cartography as central tools in the fight against poverty. He developed and advocated the use of statistics and mapping in land use planning and urban development, probably more than anyone else at that time. But history mostly remembers Charles Booth’s work in London, from 1886 onwards.

Charles Booth, who began as a merchant and shipowner, devoted himself fully to the first social surveys at the end of the 19th century, based on a precise taxonomy of social categories. He was the first to produce social maps covering the entire urban space. His investigations focused first on the East End, London’s most deprived neighbourhood, before spreading throughout the city over more than 17 years. His objective was to provide a scientific study of the living conditions of the London population, in order to put an end to the images of deprived neighbourhoods. As he said in 1902, his objective was to establish “the numerical relation which poverty, misery and depravity bear to regular earnings and comparative comfort, and to describe the general conditions under which each class lives”.

Booth’s approach was based on the creation of a statistical classification of social categories, ranging from A (the lower class) to H (the upper middle class). He therefore created, on the basis of the notes taken in the field by his inspectors, a taxonomy distinguishing the different segments of the social spectrum. He estimated the number of “poor” (classes A-D) at 300,000 people in the East End and 1,300,000 for the city as a whole, almost a third of the total population at the time. The impact of these figures on the public was enormous, and was reinforced by the poverty maps included in the volumes of results dealing first with the East End and then, a few years later, with the entire city, as illustrated in Figure 7.

Figure 7 : Charles Booth Map Descriptive of London Poverty, in 1898. Source : Vaughan (2018). See also https://booth.lse.ac.uk/map/

The map makes it possible to move from a social logic to a spatial logic: a particular class is translated into cartographic terms, and becomes a building, a block of houses, a street, an entire urban area. The social map therefore made it possible to think of the city in terms of homogeneous spatial units. This reasoning is essential for urban planning: it could not have developed amid the complexity of a discourse distinguishing between the different inhabitants of the same building. This social vision of mapping, with its focus on slums and poor neighbourhoods, should be related to a health objective.

That said, thinking about urban development in terms of health interventions to heal society from its ills is not new. In 1516, Thomas More founded one of the main forms of urban planning theory, starting with a diagnosis of the disease and then proposing a definitive solution through a total restructuring of the urban form. During the 18th century, the translation of this principle consisted in isolating particular intervention areas (characterized by their insalubrity) and removing them, sweeping away the urban past. The solution adopted at the end of the 19th century was rather to work from what already existed, and to find the most effective solutions to manage the probable future changes in the urban context.

At the end of the 19th century, we also moved from “descriptive statistics” to “prescriptive statistics”, to use Ogien’s terms (2013). We no longer simply evaluate the number of smallpox patients, we begin to make the choice to vaccinate (or not) a specific population, and therefore to set up a mandatory preventive intervention (at the time the vaccine still killed about 1 person out of 300).

Adolphe Quetelet’s homme moyen (average man) launched moral statistics, with its search for the person who embodies the norm, the average. Diseases also began to be linked to population density, poor ventilation and humidity. “Dirty, unhealthy, infectious, corrupt or simply stinking are the categories that make it possible to think what we now call pollution”, in the words of Fureix and Jarrige (2015). We then move from the social map to the “moral map”, a city thought up by hygienists. Moral geography, which until then had been the subject of partial and unsystematized observations, finds in the map a (graphical) space that synthesizes and organizes it. The social map gave the globalizing vision necessary for the existence of urban planning, and for the precise location of the sites necessary for the targeted and rational functioning of its therapeutic action. One thinks of Dr. John Snow’s 1854 map of the cholera epidemic, presented (and updated) in Figure 8. At the time, the dominant theory was the theory of miasmas, claiming that diseases such as plague or cholera spread in the form of bad air. In 1854, with the help of the Reverend Henry Whitehead and by interviewing local residents, Snow established the geographical distribution of cases and identified the source of the epidemic: a public water pump on Broad Street. While microbial research had not scientifically established the danger of the water pump, the cartographic study of the spread of the epidemic was sufficient to convince the authorities to close it.

Figure 8 : John Snow On the Mode of Communication of Cholera, in 1855. Source :  https://tabsoft.co/2y82nbf

However, as Vaughan (2018) points out, similar works can be found throughout England at the same time, such as Edwin Chadwick’s Sanitary Map of the Town of Leeds, shown in Figure 9. On this map, Chadwick identifies two groups of dwellings: working-class houses, and shops, workhouses and artisans’ houses. Coloured dots, indicating contagious diseases, only seem to proliferate in poor neighbourhoods. In particular, the map showed that the patients did not live in contiguous areas, but were scattered across the map, while remaining concentrated in poor neighbourhoods.

Figure 9: Edwin Chadwick, Sanitary Map of the Town of Leeds, 1842. Source: Vaughan (2018) and https://bit.ly/2zL3pM8

The maps had considerable public health impacts, and the zoning, formalized by Charles Booth, was the basis for spatial statistics, as it developed throughout the 20th century.

If the cartography of the city is now complex and rich, it should be noted that economists took a long time to move beyond the “linear city” model, introduced in Hotelling (1929) and refined over time, as shown in Figure 10, pitting the residential part (RD – residential district) against the business centre (BD – business district). But that’s another story…

Figure 10 : the different forms of the linear city. Source : Fujita & Thisse (1997).

References:

Booth, Charles (1902) Life and Labour in London. 17 volumes.

Burgess, Ernest (1925). The Growth of the City: An Introduction to a Research Project.

Choay, Françoise (1980). La règle et le modèle, Paris, Seuil.

Fujita, Masahisa and Thisse, Jacques-François (1997). Économie géographique, Problèmes anciens et nouvelles perspectives. Annales d’Économie et de Statistique, 45, 37-87.

Fureix, Emmanuel and Jarrige, François (2015). La modernité désenchantée : relire l’histoire du XIXe siècle français, Paris, La Découverte.

Hotelling, Harold (1929). Stability in Competition. The Economic Journal, 39, 41-57.

Latour, Bruno (1989). La science en action. Paris, La Découverte.

Maier, Jessica (2015). Rome, measured and imagined. The University of Chicago Press.

Ogien, Albert (2013). Désacraliser le chiffre dans l’évaluation du secteur public, Versailles, Éditions Quæ.

Söderström, Ola (1996) Paper cities : visual thinking in urban planning. Ecumene, 3, 249-281.

Vaughan, Laura (2018) Mapping Society: The Spatial Dimensions of Social Cartography. UCL Press.

A (short) history of chance and simulation

Hearing “there is a 10% chance of rain today” or “the medical test has a positive predictive value of 75%” shows that probabilities are now everywhere[i]. A probability is a quantity that is difficult to grasp, but unavoidable when one tries to theorize and measure chance. And while the mathematical theory ultimately arrived very late, as Hacking (2006) reminds us, this did not prevent insurance from developing quite early, with the first (actuarial) mortality tables existing even before the “probability of death” or “life expectancy” had any mathematical foundation. In the same way, many techniques were invented to “generate randomness”, before the explosion of so-called Monte Carlo methods, in parallel with the development of computing (and the fact that a machine could generate randomness).

Historical demography using collaborative genealogical data

For several months now, Ewen Gallic and I have been working on genealogical data. The first paper, Étude de la démographie française du XIXe siècle à partir de données collaboratives de généalogie, is finished. It is a methodological note describing how we reconstructed the family trees of 2.45 million people (701 million records that had to be cleaned up), corresponding to the descendants, over 3 generations, of people born in France between 1800 and 1804.

To illustrate the contribution of these rich data, we started by studying mortality over the 19th century

and noted that, although we underestimate the mortality of those under 20 and of the very old, our data are overall consistent with what we expected (perhaps less so for birth rates). We also started to study migration, from generation to generation

(here, the proportion of descendants born in the same département as their ancestors). Many other results can be read in the paper, online on HAL, and many more on the GitHub page created by Ewen.

Prediction markets as a forecasting technique

The last presidential elections have again highlighted the importance of polls, used as forecasting tools, even if pollsters deny it. As Niels Bohr and Pierre Dac put it, “prediction is difficult, especially about the future”, and alternative solutions have been considered. As is fashionable, “big data” has been mentioned (guessing voting intentions from tweets or from information published on a Facebook page), but prediction markets (and betting sites) have also been widely used. But is it so simple to use the prices (or the odds) on these prediction markets to derive probabilities? And can we really use them?

What are we betting on?

As Rhode & Strumpf (2008) recall, using bets to learn about beliefs, and thus about the probabilities that an event will occur, is nothing new. Consider, for example, papal elections at the Vatican. In 1549, Matteo Dandolo (ambassador of Venice) noted (as recounted in Baumgartner (2003)) that “it is therefore more than clear that the merchants are very well informed about the state of the poll, and that the Cardinals’ attendants in Conclave (i conclavisti) go partners with them in the wagers, which thus causes many tens of thousands of crowns to change hands”. Betting markets during elections remained popular in the United States until the Second World War. Rhode & Strumpf (2008) put forward several reasons for the loss of interest during the second half of the 20th century: improvements in polling techniques, and the legalization of betting on horses (with the consequent disappearance of the illegal betting market).

More recently, with the development of betting sites and thus of a betting market, bets have appeared (contracts sold on the online market intrade.com, wound up in 2015) on the occurrence of an earthquake of devastating magnitude (more than 9 on the Richter scale), on the winner of the Oscars ceremony, on the decline in Arctic ice cover between two years, or on the observation of the Higgs boson. Polgreen et al. (2007) mention bets on the occurrence of influenza epidemics. Another example – which we will detail a little – is the passage of an energy bill (introducing binding measures to limit greenhouse gas emissions) in the United States. The contract paying 1 USD if the bill was adopted was launched in May 2009. On June 26, 2009, the House of Representatives adopted the bill (by a narrow majority), with the objective of reducing CO2 emissions by 17% by 2020 relative to 2005, and by 80% by 2050. At that point, the contract was trading at 50¢, as shown in Figure 1. The bill still had to be approved by the Senate. Six months later, by signing the Copenhagen conference agreement in December 2009, the United States had committed itself further, and a year later (at the beginning of 2011) the Senate would be renewed following the midterm elections. The Obama administration hoped to have the bill approved during the “lame duck” session of Congress, since the midterm elections were likely to strengthen the Republicans. But on Thursday, July 22, the Democratic leader in the Senate, Harry Reid, announced the withdrawal of the bill, for lack of a majority to approve it. That day, the contract was trading at 10¢.

Figure 1: price of the contract on the passage of the ‘American Clean Energy and Security Act’ in the Senate (2009-2010)

Source: author, data from Meng (2017)

Seeing a price as a probability

First of all, recall that a fundamental property of probabilities is that the probability of an event and the probability of its complement must sum to one. This is quite logical if probabilities are seen as “frequencies”, i.e. the number of favourable cases over the number of possible cases. The interpretation becomes more delicate for continuous random variables, and a detour through measure theory is then required (in particular to accept that an “event of probability zero” may nevertheless occur). Historically, however, other definitions of probability had been proposed, in particular by Jacques Bernoulli, who introduced a notion of “moral probability” (as Johnson (2017) recalls) that reappears in the work of de Finetti under the name of “subjective probability”, and, around the same time, in the work of Frank Ramsey. For the latter, a probability measures “a degree of belief”, which can be measured “through betting odds”, and the prices that players are willing to pay provide information about these beliefs. It then becomes possible to derive probabilities from prices.

The idea was not new: it had been put forward in Van Rekeningh in Spelen van Gelucken, published by Christiaan Huyghens in 1655 (in Latin under the title ‘De Ratiociniis in Aleæ Ludo‘), and used a few years later, in 1671, by Johan de Witt in the context of annuities. Since the price of a contract paying an annuity until death can be seen as a weighted average of (fixed-maturity) annuities, by observing the prices of the various insurance contracts one could extract probabilities interpreted as survival probabilities. These probability measures were no longer constructed from observed frequencies, but from observed historical prices. The same idea reappears in financial mathematics almost three centuries later, with the fundamental theorem of asset pricing and Arrow-Debreu prices. This literature is rich and abundant (see Charpentier (2014)), but there are also lesser-known results on the construction of a consensus from probabilities, and their applications to betting.
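As a stylized sketch of this price-to-probability inversion (the interest rate and the contract prices below are invented for the illustration): if a pure endowment pays 1 at time t when the insured is still alive, its price is v^t S(t), so observed prices directly reveal the survival probabilities S(t), and the life annuity is simply the sum of these contracts.

# invented flat interest rate and contract prices, for illustration only
r = 0.04
endowment_prices = {1: 0.93, 2: 0.86, 3: 0.78, 4: 0.70, 5: 0.62}

# price of a pure endowment = discount factor * survival probability, so S(t) = price * (1+r)^t
survival = {t: p * (1 + r) ** t for t, p in endowment_prices.items()}
annuity_price = sum(endowment_prices.values())   # life annuity seen as a sum of pure endowments

for t, s in survival.items():
    print(f"S({t}) = {s:.3f}")
print(f"price of the life annuity: {annuity_price:.3f}")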

Working on the horse-betting market, Eisenberg & Gale (1959) obtained relatively general results, which can be formalized as follows. Consider an individual i\in\{1,2,\cdots,I\} who can bet on a horse j\in\{1,2,\cdots,J\} in a race. Each bettor has a total amount b_i (normalized so that b_1+b_2+\cdots+b_I=1), and places the amount \beta_{i,j} on horse j. These amounts satisfy the constraint \beta_{i,1}+\cdots+\beta_{i,J}=b_i, and we denote by \pi_j the total amount placed on horse j, so that \beta_{1,j}+\cdots+\beta_{I,j}=\pi_j. Given the budget constraints, note that \pi_1+\cdots+\pi_J=1, so that the amounts \pi_j can be interpreted as probabilities. Suppose now that each bettor has beliefs (about each horse winning) described by a vector of probabilities \mathbf{p}_i=(p_{i,1},\cdots,p_{i,J}). Eisenberg & Gale (1959) then define an equilibrium as a situation where, whenever \beta_{i,j}>0, we must have p_{i,j}=\pi_j \cdot \max_s\left\{\frac{p_{i,s}}{\pi_s} \right\}. They show that such an equilibrium exists and is unique. This is essentially the result known in financial mathematics as the law of one price, even if beliefs are not explicitly mentioned there. And it is this result that makes it possible to interpret prices on a prediction market as probabilities.
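As a naive numerical sketch of this equilibrium (a toy example with invented beliefs and budgets; a proper treatment would solve the Eisenberg-Gale convex program rather than this damped fixed-point iteration):

import numpy as np

def best_response_bets(p, b, pi):
    # each bettor places all of their budget on the horse(s) maximizing p_ij / pi_j
    bets = np.zeros_like(pi)
    for i in range(len(b)):
        ratios = p[i] / pi
        best = ratios == ratios.max()
        bets[best] += b[i] / best.sum()
    return bets

# invented toy example: 2 bettors (budgets summing to 1) and 2 horses
b = np.array([0.5, 0.5])
p = np.array([[0.6, 0.4],    # beliefs of bettor 1
              [0.2, 0.8]])   # beliefs of bettor 2

pi = np.full(2, 0.5)         # initial prices
for _ in range(500):
    pi = 0.5 * pi + 0.5 * best_response_bets(p, b, pi)   # damped update

print("candidate equilibrium prices:", pi)
print("total bets at these prices  :", best_response_bets(p, b, pi))   # should match pi

At the equilibrium, the prices equal the total amounts bet on each horse, so they sum to one and can be read as consensus probabilities.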

Building on the Eisenberg & Gale (1959) model, Manski (2004) obtains an interpretation on a prediction market. In these markets, one does not bet on a horse in a race, but on the occurrence of an event. Consider a contract paying 1$ if an event A occurs (the election of a new pope, a country leaving the euro zone, etc.), offered at price \pi_A. If there is a contract on the complementary event \bar A (we will assume here that this is the case), it should be offered at price \pi_{\bar A}=1-\pi_A, otherwise an arbitrage would be possible[i]. Now consider an agent i who believes, with probability p_i, that A will occur. If p_i>\pi_A, this agent has an interest in investing all his money in this contract; otherwise, he should invest in the other contract, associated with \bar A. At the aggregate level, the demands for contracts of type A and \bar A are, respectively, \frac{1}{\pi_A}\sum_i b_i \mathbb{P}[p_i>\pi_A] and \frac{1}{\pi_{\bar A}}\sum_i b_i \mathbb{P}[p_i<\pi_A], and we have an equilibrium if \sum_i b_i=\frac{1}{\pi_A}\sum_i b_i\mathbb{P}[p_i>\pi_A]=\frac{1}{\pi_{\bar A}}\sum_i b_i\mathbb{P}[p_i<\pi_A]. If the beliefs (p_i) and the wealths (b_i) are assumed independent, the prices \pi_A and \pi_{\bar A} can be interpreted as probabilities, \pi_{A}=\mathbb{P}[p_i>\pi_A] and \pi_{\bar A}=\mathbb{P}[p_i<\pi_A], when agents are homogeneous. Wolfers & Zitzewitz (2006) propose a different, more economic, interpretation, in the context of agents with logarithmic utility.
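A small numerical sketch of the homogeneous case (in Python, with an invented Beta distribution of beliefs): the equilibrium price solves \pi_A=\mathbb{P}[p_i>\pi_A], which is a simple fixed-point problem.

from scipy.optimize import brentq
from scipy.stats import beta

belief = beta(4, 2)   # invented belief distribution of the agents, mean 2/3

# with identical wealths, the equilibrium price solves pi_A = P[p_i > pi_A]
pi_A = brentq(lambda x: belief.sf(x) - x, 1e-6, 1 - 1e-6)

print(f"mean belief            : {belief.mean():.3f}")
print(f"equilibrium price pi_A : {pi_A:.3f}")

The price (about 0.62 here) is a quantile of the belief distribution rather than its mean (about 0.67), which is precisely why interpreting it as an average belief requires extra assumptions.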

Markets and online betting

The situation we have just described is that of a perfect market, where an equilibrium is reached among the bettors, and this equilibrium yields prices that can be interpreted as winning probabilities (one sometimes speaks of “pari mutuel” betting). But the practice of online betting is often different, because the prices are set by a bookmaker. As the name suggests, these people – originally – recorded the wagers, but in practice they are also the ones who set the prices. During the last presidential elections in France, for example, several sites offered bets on the outcome of the second round.

Table 1: odds for the 2017 presidential election

            odds (online betting sites)          vote intentions (polling institutes)
            bet365    comeon    betsafe          FT       Bloomberg    BBC
Macron      1/8       1/8       1/7              64%      60.5%        61%
Le Pen      5/1       9/2       9/2              36%      39.5%        39%

Source: http://www.onlinebettingsites.com/2017-french-election/ (data from late April 2017)

As Table 1 shows, the odds offered by the bookmakers differ from one betting site to another. Odds of 1/8 mean that, by betting €8, my net gain will be €1 if the candidate wins on election day (and I lose my stake if the candidate loses). The associated “probability” is here \pi_{A}=8/9\sim88.89\% for E. Macron (my gross payoff being €9/8 per €1 wagered) and \pi_{\bar A}=1/6\sim16.67\% for M. Le Pen. The problem – and here we return to the discussion initiated by Jacques Bernoulli – is that these “probabilities” do not sum to one, since 8/9+1/6\sim1.055. The difference comes from the fact that these are not “fair” prices: the bookmaker guarantees himself a certain return (of about 5.5% of the initial stake, whichever site is chosen). The true probabilities implied by these odds are respectively \pi_A=16/19\sim 84.2\% and \pi_{\bar A}=3/19\sim 15.8\%. The online betting sites would thus give Emmanuel Macron a probability of winning of 84%, whereas the polling institutes indicate a vote intention of about 62%.
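The odds-to-probability conversion and the removal of the bookmaker’s margin can be summarized in a few lines (a sketch in Python, using the bet365 odds from Table 1):

from fractions import Fraction

def implied_probability(odds):
    # fractional odds a/b: stake b to win a (net), so the implied probability is b / (a + b)
    frac = Fraction(odds)
    return frac.denominator / (frac.numerator + frac.denominator)

odds = {"Macron": "1/8", "Le Pen": "5/1"}            # bet365 odds from Table 1
raw = {k: implied_probability(v) for k, v in odds.items()}
overround = sum(raw.values())                        # about 1.055: the bookmaker's margin
fair = {k: p / overround for k, p in raw.items()}    # renormalized "true" probabilities

print({k: round(p, 4) for k, p in raw.items()}, "sum =", round(overround, 4))
print({k: round(p, 4) for k, p in fair.items()})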

Do we really have a probability, or a range?

As we can see, the two quantities are not identical: one indicates a belief about a probability of winning, the other the share of votes a candidate should obtain. In Manski’s model, the probabilities induced by the average beliefs and the prices coincide if agents are assumed to have logarithmic utility. As Wolfers & Zitzewitz (2006) show, if agents are risk averse, this is no longer the case. In particular, the article suggests considering bounds on the average belief of the form [\pi_{A}^2,2\pi_A-\pi_A^2], shown in Figure 2, where \pi_A is the “market price”, to keep a notation close to that of Figure 1.
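A couple of lines suffice to compute these bounds for a given market price (a small sketch; 0.842 is the Macron price implied by Table 1):

# Wolfers & Zitzewitz bounds on the average belief, given a market price pi_A:
# the average belief lies in [pi_A^2, 2*pi_A - pi_A^2]
for pi_A in [0.5, 0.842, 0.95]:
    lo, hi = pi_A ** 2, 2 * pi_A - pi_A ** 2
    print(f"market price {pi_A:.3f}: average belief between {lo:.3f} and {hi:.3f}")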

Figure 2: bounds on the average beliefs

Should we use the predictions of prediction markets?

As we saw in the last French election, many people question the role played by polls, noting in particular that they are not a neutral snapshot: voters may choose their candidate precisely by reading the polls (the famous “useful vote” theory, in particular). The danger of using prediction markets as forecasting tools is that these markets can be manipulated, as Schröder (2009) has shown. Another danger – and this was noted during the last presidential election in the United States – is that the odds on online betting sites reflect the beliefs of a certain “elite” (say, relatively educated people, and the upper class, who follow politics), unlike polls, which are supposed to be conducted on a representative sample of voters and to reflect the choices of the “people” (and not an opinion or a belief). If bettors take these polls into account, the prices reflect their beliefs. A final criticism is that the majority of prediction markets involve very small transaction volumes.

Traditional polls now face growing distrust. It is becoming difficult to build a representative sample, non-response rates are reaching unprecedented levels (and nobody knows whether this non-response is correlated with electoral choices). And confusion reigns between classical polls and the “online polls” hosted on various news sites. But to borrow the anecdote told by Scott Armstrong in Knowledge@Wharton in 2004: if, in a meeting of the executive board of a large company, all the participants are asked to make forecasts of next year’s results, or if the participants are asked to bet on next year’s results, which forecasts will be the more credible?

References

Arrow, K.J. & Debreu, G. Existence of an equilibrium for a competitive economy. Econometrica, vol. 22, 1954.

Baumgartner, F. Behind locked doors: a history of papal elections. 2003

Charpentier, A. L’efficience des marchés : hypothèse de modèle ou fait stylisé ? Risques, 96. 2014

Frank, E., Verbeek, E. & Nüesch, S. Inter-Market Arbitrage in Betting. Economica, 2012.

Johnson, T. Ethics in Quantitative Finance: a pragmatic theory of markets. Wiley, 2017.

Knowledge@Wharton. How Credible Are Polls? Is There a Better Way to Predict Outcomes in Politics and Business. November 3, 2004.

Manski, C.F. Interpreting the Predictions of Prediction Markets. Working Paper 10359, 2004

Meng, K.C. Using a Free Permit Rule to Forecast the Marginal Abatement Cost of Proposed Climate Policy. American Economic Review vol 107, 2017

Polgreen. Use of Prediction Markets to Forecast Infectious Disease Activity. Healthcare Epidemiology, 2007

Ramsey, F.P. Truth and probability, in The Foundations of Mathematics and other Logical Essays, 1931.

Rhode, P.W. & Strumpf, K. (2008) Historical Election Betting Markets: an International Perspective.

Schröder, J. Manipulations in Prediction Markets. Universitätsverlag Karlsruhe, 2009.

The Economist. The Future of Futurology, November 15, 2007.

Wolfers, J. & Zitzewitz, E. Interpreting Prediction Market Prices as Probabilities. Working Paper 12200, 2006

Wood, T. Do betting markets outperform election polls? Hardly. The Washington Post, August 2016.

[i] As Franck et al. (2012) showed, since there are now several betting sites (several markets), arbitrage opportunities may exist across markets, but rarely within a given market.

Gender and insurance pricing: correlation or causality?

The article La tarification par genre en assurance, corrélation ou causalité ?, co-written with Katrien Antonio, should appear in the coming days in Risques.

Segmentation in insurance refers to the classification an insurer performs, according to various criteria, in order to set the premium so that it reflects, as well as possible, the risk represented by each policyholder. This is known as “rate segmentation”. Classically, econometric regression models make it possible to capture the variables most correlated with claim frequency or claim cost. But, as Davet [2015] noted, “while the overall correlation between age and the cost of health risk is undeniable, the causal relationships are less simple than they appear”. The correlation, however strong, between claims experience and gender in motor insurance can no longer be invoked to create rate discrimination since December 2012*. But as we will see, connected devices make it possible to recover the true (causal) rating variables, for which gender was then merely a proxy.

The article draws on several elements of Unraveling the Predictive Power of Telematics Data in Car Insurance Pricing, by Roel Verbelen, Katrien Antonio and Gerda Claeskens.