Category Archives: Economics

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

In parallel with these tools developed by, and for, economists, a whole literature has developed on similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics has developed around the principle of inference (that is, explaining the relationship linking y to the variables \mathbf{x}), while another culture is primarily interested in prediction. In a discussion that follows the article, David Cox states very clearly that in statistics (and econometrics) “predictive success (…) is not the primary basis for model choice“. We will come back here to the roots of automatic learning techniques. The important point, as we will see, is that the main concern of machine learning is related to the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on out-of-sample tests.

A learning machine

Today, we speak of “machine learning” to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, it should be noted that historically other names have been given to them. For example, Friedman (1997) proposes to make the link between statistics (which closely resembles econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) and what was then called “data mining” (which included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the “statistical learning” techniques described in Hastie et al. (2009). But one should keep in mind that machine learning is a very large field of research.

The so-called “natural” learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also learns simultaneously the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization, discovery, more or less supervised or autonomous learning, etc. The idea in artificial intelligence is to take inspiration from the functioning of the brain to allow “artificial” or “automatic” learning by a machine. A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve in order to win. One historical approach has been to teach the machine the rules of the game. While this allows it to play, it will not help the machine to play well. Assuming that the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm, based on an evaluation function: the machine searches forward in the tree of possible moves, as far as the computational resources allow (about ten moves ahead in chess, for example). Then it computes various criteria (which have been previously indicated to it) for all positions (number of pieces taken, or lost, occupancy of the center, etc., in our chess example), and finally plays the move that maximizes its gain. Another example is the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (on checks, ZIP codes on envelopes, etc.). The problem is to predict the value of a variable y, knowing that a priori y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training bases, in other words millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is to use a decision criterion based on the closest neighbours whose labels are known (using a predefined metric).

The method of the closest neighbors (“k-nearest neighbors”) can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, let us order the observations as a function of the distance between the \mathbf{x}_i and \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}). Then we can consider, as a prediction for y, the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
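As a small illustration, here is a minimal R sketch of the predictor \widehat{m}_k described above, with the Euclidean distance (the function name, the simulated data and the choice k=10 are ours, for illustration only):

knn_predict <- function(x_new, X, y, k = 5) {
  d <- sqrt(rowSums(sweep(X, 2, x_new)^2))   # distances between x_new and each x_i
  mean(y[order(d)[1:k]])                     # average of the y_i of the k closest x_i
}

set.seed(1)
n <- 200
X <- matrix(rnorm(2 * n), n, 2)
y <- X[, 1]^2 + X[, 2] + rnorm(n, sd = .2)
knn_predict(c(0, 0), X, y, k = 10)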

Automatic learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine will then explore the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E with respect to a task T and a performance measure P, if its performance on T, measured by P, improves with experience E. Task T can be a default score, for example, and performance P can be the percentage of errors made. The system learns if the proportion of correctly predicted defaults increases with experience.

As we can see, machine learning is basically a problem of optimizing a criterion based on data (from now on called training data). Many textbooks on machine learning techniques propose algorithms without ever mentioning any probabilistic model. In Watt et al. (2016), for example, the word “probability” is mentioned only once, with this footnote that will surprise and amuse any econometrician: “the logistic regression can also be interpreted from a probabilistic perspective” (page 86). But many recent books offer a review of machine learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the “probably approximately correct” (PAC) learning paradigm, a probabilistic flavour has been added to a previously very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).

To be continued (references are online here)…

Probabilistic Foundations of Econometrics, part 4

This post is the fourth one of our series on the history and foundations of econometric and machine learning models. Part 3 is online here.

Goodness of Fit, and Model Choice

In the Gaussian linear model, the coefficient of determination – noted R^2 – is often used as a measure of the quality of fit. It is based on the variance decomposition formula \underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\bar{y})^2}_{\text{total variance}}=\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{\text{residual variance}}+\underbrace{\frac{1}{n}\sum_{i=1}^n (\widehat{y}_i-\bar{y})^2}_{\text{explained variance}}. The R^2 is defined as the ratio of explained variance to total variance, another interpretation of the coefficient that we had introduced from the geometry of least squares, R^2= \frac{\sum_{i=1}^n (y_i-\bar{y})^2-\sum_{i=1}^n (y_i-\widehat{y}_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2}. The sums of squared errors in this expression can be rewritten as a log-likelihood. However, it should be remembered that, up to an additive constant (obtained with a saturated model), in generalized linear models the deviance is defined by {Deviance}(\widehat{\beta}) = -2\log[\mathcal{L}], which can also be noted {Deviance}(\widehat{\mathbf{y}}). A null deviance can be defined as the one obtained without using the explanatory variables \mathbf{x}, so that \widehat{y}_i=\overline{y}. It is then possible to define, in a more general context (with a non-Gaussian distribution for y), R^2=\frac{{Deviance}(\overline{y})-{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}=1-\frac{{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}. However, this measure cannot be used to choose a model, if one wishes to end up with a relatively simple model, because it increases artificially with the addition of explanatory variables without significant effect. We will then tend to prefer the adjusted R^2, \bar R^2 = {1-(1-R^{2})\cdot{n-1 \over n-p}} = R^{2}-\underbrace{(1-R^{2})\cdot{p-1 \over n-p}}_{\text{penalty}}, where p is the number of parameters of the model. Measuring the quality of fit in this way penalizes overly complex models.
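As a quick sketch in R (on simulated data of our own), the two versions of the R^2 – the classical one and the deviance-based one – can be compared as follows:

set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 + rnorm(n)              # x2 has no real effect

fit <- lm(y ~ x1 + x2)
summary(fit)$r.squared                   # R^2
summary(fit)$adj.r.squared               # adjusted R^2, with the penalty

fit_glm <- glm(y ~ x1 + x2, family = gaussian)
1 - fit_glm$deviance / fit_glm$null.deviance   # deviance-based (pseudo) R^2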

This idea will be found in the Akaike criterion, where AIC=Deviance+2\cdot p, or in the Schwarz criterion, BIC=Deviance+\log(n)\cdot p. In large dimension (typically p>\sqrt{n}), we will tend to use a corrected AIC, defined by AIC_c=Deviance+2\cdot p\cdot n/(n-p-1).

These criteria are used in so-called “stepwise” methods, which belong to the class of subset selection methods. In the “forward” method, we start by regressing on the constant, then we add one variable at a time, retaining the one that lowers the AIC criterion the most, until adding a variable increases the AIC of the model. In the “backward” method, we start by regressing on all variables, then we remove one variable at a time, removing the one that lowers the AIC criterion the most, until removing a variable increases the AIC of the model.
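A minimal sketch of those two procedures, using R’s step function on simulated data (variable names and the data-generating process are ours):

set.seed(1)
n  <- 200
df <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n), x4 = rnorm(n))
df$y <- 1 + 2 * df$x1 - df$x2 + rnorm(n)   # only x1 and x2 matter

null <- lm(y ~ 1, data = df)
full <- lm(y ~ x1 + x2 + x3 + x4, data = df)

# "forward": start from the constant, add the variable that lowers AIC most
step(null, scope = ~ x1 + x2 + x3 + x4, direction = "forward", trace = 0)
# "backward": start from the full model, remove variables while AIC decreases
step(full, direction = "backward", trace = 0)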

Another justification for this notion of penalty (we will come back to this idea in machine learning) can be the following. Let us consider an estimator in the class of linear predictors, \mathcal{M}=\big\lbrace m:~m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T\mathbf{y} \text{ where }\mathbf{S}=(\mathbf{s}_{\mathbf{x}_1},\cdots,\mathbf{s}_{\mathbf{x}_n})^T\text{ is some smoothing matrix}\big\rbrace and assume that y=m_0(\mathbf{x})+\varepsilon, with \mathbb{E}[\varepsilon]=0 and \text{Var}[\varepsilon]=\sigma^2\mathbb{I}, so that m_0(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. From a theoretical point of view, the quadratic risk associated with an estimated model \widehat{m}, \mathbb{E}\big[(Y-\widehat{m}(\mathbf{X}))^2\big], is written \mathcal{R}(\widehat{m})=\underbrace{\mathbb{E}\big[(Y-m_0(\mathbf{X}))^2\big]}_{\text{error}}+\underbrace{\mathbb{E}\big[(m_0(\mathbf{X})-\mathbb{E}[\widehat{m}(\mathbf{X})])^2\big]}_{\text{bias}^2}+\underbrace{\mathbb{E}\big[(\mathbb{E}[\widehat{m}(\mathbf{X})]-\widehat{m}(\mathbf{X}))^2\big]}_{\text{variance}} if m_0 is the true model. The first term is sometimes called the “Bayes error”, and does not depend on the selected estimator \widehat{m}.

The empirical quadratic risk, associated with a model m, is here \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n (y_i-m(\mathbf{x}_i))^2 (by convention). We recognize here the mean square error, “mse”, which will more generally give the “risk” of the model m when another loss function is used (as we will discuss later on). It should be noted that \displaystyle{\mathbb{E}[\widehat{\mathcal{R}}_n(m)]=\frac{1}{n}\|m_0(\mathbf{x})-m(\mathbf{x})\|^2+\frac{1}{n}\mathbb{E}\big(\|{Y}-m_0(\mathbf{X})\|^2\big)}. We can show that n\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]=\mathbb{E}\big(\|Y-\widehat{m}(\mathbf{x})\|^2\big)=\|(\mathbb{I}-\mathbf{S})m_0\|^2+\sigma^2\|\mathbb{I}-\mathbf{S}\|^2, so that the (real) risk of \widehat{m} is {\mathcal{R}}_n(\widehat{m})=\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]+2\frac{\sigma^2}{n}\text{trace}(\boldsymbol{S}). So, if \text{trace}(\boldsymbol{S})\geq0 (which is not too strong an assumption), the empirical risk underestimates the true risk of the estimator. Actually, we recognize here the number of degrees of freedom of the model, the right-hand term corresponding to Mallows’ C_p, introduced in Mallows (1973) using, not deviance, but R^2.
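To fix ideas, here is a hedged sketch in R (the data and the cubic-polynomial smoother are ours): for ordinary least squares the smoothing matrix \mathbf{S} is the hat matrix, its trace is the number of parameters, and the penalized empirical risk gives a C_p-type quantity,

set.seed(1)
n <- 100
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = .3)

X <- cbind(1, x, x^2, x^3)                # cubic polynomial regression
S <- X %*% solve(t(X) %*% X) %*% t(X)     # smoothing ("hat") matrix
sum(diag(S))                              # trace(S) = number of parameters, here 4

yhat   <- S %*% y
sigma2 <- sum((y - yhat)^2) / (n - sum(diag(S)))
mean((y - yhat)^2) + 2 * sigma2 * sum(diag(S)) / n   # empirical risk + Cp-type penalty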

Statistical Tests

The most traditional test in econometrics is probably the significance test, corresponding to the nullity of a coefficient in a linear regression model. Formally, it is the test of H_0:\beta_k=0 against H_1:\beta_k\neq 0. The so-called Student t-test, based on the statistic t_k=\widehat{\beta}_k/se_{\widehat{\beta}_k}, allows to decide between the two alternatives, using the p-value of the test, defined by \mathbb{P}[|T|>|t_k|] with T\overset{\mathcal{L}}{\sim} Std_\nu, where \nu is the number of degrees of freedom of the model (for the standard linear model, \nu is n minus the number of estimated parameters). In large dimension, however, this statistic is of very limited interest, given a significant FDR (“False Discovery Rate”). Classically, with a significance level \alpha=0.05, 5% of the non-significant variables will appear, wrongly, to be significant. Suppose that we have p=100 explanatory variables, but that only 5 are really significant. We can hope that these 5 variables will pass the Student test, but we can also expect that about 5 additional variables (false positives) will emerge. We will then have 10 variables perceived as significant, while only half of them really are, i.e. an FDR of 50%. In order to avoid this recurrent pitfall in multiple testing, it is natural to use the procedure of Benjamini & Hochberg (1995).
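The phenomenon is easy to reproduce in R with p.adjust (a sketch on simulated data, where only the first 5 of 100 covariates actually matter):

set.seed(1)
n <- 200; p <- 100
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:5] %*% rep(2, 5)) + rnorm(n)     # only the first 5 variables matter

pv <- summary(lm(y ~ X))$coefficients[-1, 4]     # p-values of the 100 slopes
sum(pv < .05)                                    # "significant" at 5%, false positives included
sum(p.adjust(pv, method = "BH") < .05)           # after the Benjamini-Hochberg correction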

From a correlation to some causal effect

Econometric models are used to implement public policy evaluations. It is therefore essential to fully understand the underlying mechanisms in order to know which variables actually make it possible to act on a variable of interest. But then we move on to another important dimension of econometrics. Jerzy Neyman was responsible for the first work on the identification of causal mechanisms, and then Rubin (1974) formalized the approach, called the “Rubin causal model” in Holland (1986). The first approaches to the notion of causality in econometrics were based on the use of instrumental variables, regression discontinuity models, difference-in-differences analysis, and natural (or not) experiments. Causality is usually inferred by comparing the effect of a policy – or more generally of a treatment – with its counterfactual, ideally given by a random control group. The causal effect of the treatment is then defined as \Delta=y_1-y_0, i.e. the difference between what the situation would be with treatment (noted t=1) and without treatment (noted t=0). The concern is that only y=t\cdot y_1+(1-t)\cdot y_0 and t are observed. In other words, the causal effect of variable t on y is not observed (since only one of the two potential outcomes – y_0 or y_1 – is observed for each individual), but it is also individual, and therefore a function of the covariates x. Generally, by making assumptions about the distribution of the triplet (Y_0,Y_1,T), some parameters of the causal effect distribution become identifiable, based on the density of the observable variables (Y,T). Classically, we will be interested in the moments of this distribution, in particular the average treatment effect in the population, \mathbb{E}[\Delta], or even just the average treatment effect on the treated, \mathbb{E}[\Delta|T=1]. If the outcome (Y_0,Y_1) is independent of the treatment assignment variable T, it can be shown that \mathbb{E}[\Delta]=\mathbb{E}[Y|T=1]- \mathbb{E}[Y|T=0]. But if this independence hypothesis is not verified, there is a selection bias, often associated with \mathbb{E}[Y_0|T=1]- \mathbb{E}[Y_0|T=0]. Rosenbaum & Rubin (1983) propose to use a propensity score, p(x)=\mathbb{P}[T=1|X=x], noting that if the variable Y_0 is independent of access to treatment T conditionally on the explanatory variables X, then it is independent of T conditionally on the score p(X): it is sufficient to match observations using their propensity score. Heckman et al. (2003) thus propose a kernel estimator on the propensity score, which simply provides an estimator of the average effect of the treatment on the treated.
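As a hedged illustration (not the kernel estimator mentioned above, but a simple weighting estimator built on the same propensity score idea, on simulated data where the true treatment effect is 2), one could write in R:

set.seed(1)
n <- 5000
x <- rnorm(n)
d <- rbinom(n, 1, plogis(0.5 * x))          # treatment more likely when x is large
y <- 1 + x + 2 * d + rnorm(n)               # outcome depends on x and on the treatment

naive <- mean(y[d == 1]) - mean(y[d == 0])  # biased by selection on x

ps  <- glm(d ~ x, family = binomial)$fitted.values   # estimated propensity score p(x)
att <- mean(y[d == 1]) -
       sum((y * ps / (1 - ps))[d == 0]) / sum((ps / (1 - ps))[d == 0])
c(naive = naive, att_weighting = att)        # the weighted estimate is close to 2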

To be continued: next time, we’ll introduce “machine learning techniques” (references mentioned above are online here).

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential law (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first-order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}, based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression, the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to the error term of a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we will define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
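As a sketch of this iterative scheme (iteratively reweighted least squares), here is a minimal R implementation for a Poisson regression with log link, on simulated data of our own; the weights appear here directly rather than through \mathbf{W}^{-1}, which is only a matter of notational convention:

set.seed(1)
n <- 1000
x <- rnorm(n)
y <- rpois(n, exp(1 + 0.5 * x))
X <- cbind(1, x)

beta <- rep(0, ncol(X))
for (k in 1:25) {
  eta  <- X %*% beta
  mu   <- exp(eta)                    # inverse of the log link
  W    <- as.vector(mu)               # working weights: Var[Y]=mu and g'(mu)=1/mu
  z    <- eta + (y - mu) / mu         # working response (first-order expansion of g)
  beta <- solve(t(X) %*% (W * X), t(X) %*% (W * z))
}
cbind(irls = as.vector(beta), glm = coef(glm(y ~ x, family = poisson)))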

From a numerical point of view, the computer will solve the first-order condition, and actually, the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.
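For instance (a small sketch, with simulated positive but non-integer observations), R’s glm function will issue warnings about non-integer responses, but it still solves the first-order condition and returns an estimator:

set.seed(1)
x <- rnorm(200)
y <- exp(1 + 0.5 * x + rnorm(200, sd = .2))    # positive, but not integer-valued
coef(glm(y ~ x, family = poisson(link = "log")))
# warnings about non-integer values, but the "Poisson regression" is computed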

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of a logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta), where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\boldsymbol{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the isoprobability curves are here the parallel hyperplanes \beta_0+\mathbf{x}^T\beta=\text{constant}. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}_i^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent modeling of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
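Both models are one-line fits in R; a minimal sketch on simulated data (ours) shows that the coefficients differ by a scale factor while the fitted probabilities are very close:

set.seed(1)
n <- 1000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.5 + x))

logit  <- glm(y ~ x, family = binomial(link = "logit"))
probit <- glm(y ~ x, family = binomial(link = "probit"))
cbind(logit = coef(logit), probit = coef(probit))
# logit coefficients are roughly 1.6 times the probit ones,
# but predict(logit, type="response") and predict(probit, type="response") almost coincide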

Regression in high dimension

As we mentioned earlier, the first-order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s of \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to do non-uniform sub-sampling, with a probability related to the influence of observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be computed once the model has been estimated).
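A small sketch of uniform sub-sampling in R (the sizes and the data are ours, chosen so that there are no high-leverage points), together with the leverages L_i:

set.seed(1)
n <- 1e5; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% rep(1, p)) + rnorm(n)

ns  <- 1e3
idx <- sample(1:n, ns)                                    # uniform sub-sample
beta_s <- solve(t(X[idx, ]) %*% X[idx, ], t(X[idx, ]) %*% y[idx])
max(abs(beta_s - 1))                                      # close to the true coefficients

L <- hat(X, intercept = FALSE)   # leverages, diagonal of X(X'X)^{-1}X'
summary(L)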

In general, we will talk about massive data when the data table, of size n\times p, does not fit in the RAM memory of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many libraries of algorithms assimilated to machine learning use iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). This method allows us, at each iteration, to avoid computing the gradient over every observation of our learning base. Rather than making an average descent at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is calculated (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta(\mathbf{x}) and y (like the quadratic loss function, as in Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Shuffle the data
  • Iteration step: For t=1,\cdots, n, we draw i\in\{1,\cdots,n\} without replacement, and we set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(\mathbf{x}_i)) } }{ \partial{ \beta}}

This algorithm can be repeated several times as a whole, depending on the user’s needs (each full pass over the data is usually called an epoch). The advantage of this method is that at each iteration, it is not necessary to calculate the gradient over all observations (there is no longer a sum over the n observations). It is therefore suitable for large databases. This algorithm is based on a convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
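A minimal R sketch of this stochastic gradient descent, for the quadratic loss (the simulated data, the constant step size and the number of passes are ours, for illustration):

set.seed(1)
n <- 10000
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)

beta <- c(0, 0)
for (epoch in 1:5) {                 # the whole algorithm repeated several times
  for (i in sample(1:n)) {           # step 0: shuffle, then one pass over the data
    grad <- -2 * (y[i] - sum(X[i, ] * beta)) * X[i, ]   # gradient of (y_i - x_i'beta)^2
    beta <- beta - 0.01 * grad       # gamma_t kept fixed at 0.01 here, for simplicity
  }
}
rbind(sgd = beta, ols = coef(lm(y ~ x)))   # ends up in a neighborhood of the OLS solution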

(references will be given in the very last post of that series) To be continued

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let’s note \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Note \mathcal{E}_X the space generated by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered here. Note that no distributional assumption is made in this section: the geometric properties are derived from the properties of expectation and variance in the space of finite variance variables.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. The prediction is then \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. Since \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that the residuals are an innovation term, unpredictable in the sense that \Pi_{X}\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2, which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}}. The coefficient of determination, R^2, is then interpreted as the squared cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta). An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes y=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta}_2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables of the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, one on each set of explanatory variables. This is a double projection theorem, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between conditional expectation and orthogonal projection in the space of variables of finite variance).
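The Frisch-Waugh result is easy to check numerically; here is a minimal sketch in R (simulated data and names are ours), where the coefficient of the second regressor in the full regression is recovered by regressing residuals on residuals:

set.seed(1)
n  <- 1000
x1 <- rnorm(n); x2 <- rnorm(n) + 0.5 * x1      # correlated regressors
y  <- 1 + 2 * x1 - x2 + rnorm(n)

coef(lm(y ~ x1 + x2))["x2"]                    # coefficient of x2 in the full regression

y_star  <- residuals(lm(y ~ x1))               # project y on the orthogonal of [1, x1]
x2_star <- residuals(lm(x2 ~ x1))              # same projection for x2
coef(lm(y_star ~ x2_star - 1))                 # identical coefficient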

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf{\beta}_1 + \underbrace{ (\mathbf{X}_1^T\mathbf{X}_1)^{-1} \mathbf{X}_1^T \mathbf{X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}, so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias (\beta_{12}) being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b} _1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1 but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoid over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us to see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns of the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is harder to obtain explicitly.

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i, where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)}, where K(\cdot) is a kernel function, which assigns a weight that decreases as x_i gets further from x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic expansions, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}} for some constants that can be estimated (see Simonoff (1996) for a discussion). These two quantities evolve inversely with h, as shown in Figure 1 (where the meta-parameter on the x-axis is, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).
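To illustrate the role of h, here is a minimal R sketch of the Nadaraya-Watson estimator with a Gaussian kernel (the simulated data and the two bandwidth values are ours):

set.seed(1)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = .3)

m_nw <- function(u, h) {
  w <- dnorm(u - x, sd = h)          # kernel weights K_h(u - x_i)
  sum(w * y) / sum(w)                # Nadaraya-Watson average
}
u <- seq(0, 1, by = .01)
plot(x, y, col = "grey")
lines(u, sapply(u, m_nw, h = .02), col = "red")    # small h: low bias, high variance
lines(u, sapply(u, m_nw, h = .2),  col = "blue")   # large h: high bias, low variance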

The natural idea is then to try to minimize the mean square error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then to integrate it over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})}, while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x), and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}}. The main problem here, in practice, is that many of the terms in the expression above are unknown. Automatic learning offers computational techniques, whereas the econometrician is used to searching for asymptotic (mathematical) properties.

To be continued (references mentioned above are online here)…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because as Haavelmo (1944) (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)) showed, econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will note x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well, in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations for mathematical statistics, which can be found in all statistical textbooks, including today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that under broad conditions the sum of a large number of random variables is approximately normally distributed.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming independent variables (Y_i, \mathbf{X}_i) (even if it is possible to imagine temporal observations – then we would have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~ (1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written: \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2. Note that the term on the right, measuring a distance between the data and the model, will be interpreted as deviance in generalized linear models. Then we will set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace. The maximum likelihood estimator is obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.

The first-order conditions allow to find the normal equations, whose matrix writing is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator: \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as often in econometrics), y=\mathbf{x}^T\mathbf{\beta}+\varepsilon. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct this estimator, given by equation (2), without that Gaussian assumption. Indeed, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with smallest variance[3] among unbiased linear estimators. Furthermore, even if we cannot get normality at finite distance, asymptotically this estimator is Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
The condition of having a full-rank \mathbf{X} matrix can be (numerically) strong in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, for any \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
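A minimal sketch in R of the closed-form least squares estimator of equation (2) and of its ridge-regularized version (simulated data, and an arbitrary \lambda=10, are ours):

set.seed(1)
n <- 100; p <- 5
X <- cbind(1, matrix(rnorm(n * p), n, p))
y <- X %*% c(1, 2, -1, 0, 0, 0) + rnorm(n)

beta_ols   <- solve(t(X) %*% X, t(X) %*% y)                       # (X'X)^{-1} X'y
lambda     <- 10
beta_ridge <- solve(t(X) %*% X + lambda * diag(ncol(X)), t(X) %*% y)
cbind(ols = beta_ols, ridge = beta_ridge)   # ridge coefficients are shrunk towards 0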

Residuals

It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Equation (1) is then often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables, following some \mathcal{N}(0,\sigma^2) distribution. With a vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}]. Those (estimated) residuals are basic tools for diagnosing the relevance of the model.

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i, where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in counting models or in logistic regression.

However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear, to begin with) between the quantity of a traded good, q, and its price p. This allows us to imagine a supply equation, q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term), where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed, but this interpretation often makes the link between an economic relationship and a complicated economic model difficult, since economic theory speaks abstractly about a relationship between two magnitudes, while the econometric model imposes a specific shape (which magnitude is y and which magnitude is x), as shown in more detail in Morgan (1990), Chapter 7.

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.

“The Cat’ Nat’ mechanism runs counter to prevention”

At the beginning of the month, https://newsassurancespro.com/ published an interview… let me reproduce a few excerpts here,

Where does catastrophe risk modelling stand in France?

Much progress has been made, in particular thanks to the input of geologists and hydrologists who have modelled the theoretical exposure to catastrophe risks. We now have incredibly precise maps that make it possible to know the nature of the risks street by street in France, both for flood and for drought. However, this static vision does not hold up well against an approach in terms of flows. Thus, we have some of the richest historical data in the world, but the existence of a simple dam installed upstream of a town can change the reality of the risk on the ground.

Is the current natural catastrophe regime viable?

Yes. The natural catastrophe regime is viable, because the State intervenes as a last resort. This keeps the scheme solvent. However, the mechanism runs counter to prevention. Admittedly, the compensation system, and more specifically the deductibles, take into account whether or not natural risk prevention plans (PPRN) have been put in place in the municipalities. But this happens in an ex-post scheme, that is, after the event has occurred.

Prevention is the fundamental link missing from the regime as it is currently designed. Any reform that would introduce a dose of prevention would be a step in the right direction.

Is natural risk management well organized in France?

Insurers, municipalities, regions, the State… natural risk management involves many different stakeholders. There is a lack of coordination. Thus, setting up a flood risk prevention plan (PRI) in a municipality is a good thing. But if it is not designed taking into account the surrounding municipalities, its scope is limited. There are dependency mechanisms in flood risk that are not confined to the geographical boundaries of a town or village.

Should we move towards more risk segmentation?

It is clear that the level of premium paid by policyholders for natural catastrophes is decorrelated from the risk to which they are actually exposed. A few months ago, our research chair (on segmentation and mutualization) looked into the question. We had to think about the ethical limits of segmenting catastrophe risk. Actuarial calculations of risk exposure showed huge gaps between the Bouches-du-Rhône and the Orne, for example. The subject seems taboo among insurers. But pricing according to the real nature of the risk would nevertheless be a powerful prevention tool. Companies would think twice before settling in a flood zone.

The full interview is online at https://newsassurancespro.com/

Revisiting a result of Feller (1957) on the homogeneity and heterogeneity of risks

While I was talking with a few people who had stayed in the room after my talk last week, Daniel Zajdenweber reminded me of a nice little result of William Feller on the homogeneity of risks.

Suppose here that there can be at most one accident per year, and let X_i denote the indicator of the occurrence of an accident. If the risks are homogeneous, these variables follow Bernoulli distributions B(p). But one can also assume that the population is heterogeneous, with Bernoulli distributions B(p_i). Let S_n be the total number of claims when n risks are considered. Then Var[S_n]=\sum_{i=1}^n p_i(1-p_i)=n\bar{p}-\sum_{i=1}^n p_i^2, where \bar{p} is the average probability of having an accident. A little calculation shows that Var[S_n] is maximal when the last term is minimal, which is obtained when all the p_i are equal.
Hoeffding (1956) notes in his introduction that

it is well known that the maximum of the variance of S is attained when p_i=\bar{p}

Feller (1957) points out, for his part, the

surprising result that the variability of p_i‘s, or lack of uniformity, decreases the magnitude of chance fluctuations

and he then gives an actuarial interpretation of this result,

the number of annual fires in a community may be treated as a random variable; for a given average number, the variability is maximal if all households have the same probability of fire

There is no mathematical problem with this result. Indeed, if we look for

\min\left\lbrace\sum_{i=1}^n p_i^2\right\rbrace\text{ s.t. }\sum_{i=1}^n p_i=\text{ constant}

the maximum (of the variance) is obtained when all the p_i’s are equal, since the first-order condition on the Lagrangian gives

\frac{\partial}{\partial p_i}\left(\sum_{i=1}^n p_i^2+\lambda\sum_{i=1}^n (p_i-\bar{p})\right)=2p_i+\lambda=0

One can also give a proof by induction. We can also look at what this gives numerically, with probabilities that increase linearly across the population

n=10
p=(1:n)/(n+1)                            # heterogeneous probabilities p_i
rsim1=function(i) sum(runif(n)<=p)       # one draw of S_n = X_1 + ... + X_n
X1=unlist(lapply(1:1e6,rsim1))           # 1e6 simulations
barplot(table(X1)/1e6,col=rgb(1,0,0,.4))

or the uniform case

barp=rep(mean(p),n)                      # homogeneous case: the same probability for everyone
rsim2=function(i) sum(runif(n)<=barp)
X2=unlist(lapply(1:1e6,rsim2))
barplot(table(X2)/1e6,col=rgb(0,0,1,.4))

which shows more dispersion, more variance

> var(X1)
[1] 1.820453
> var(X2)
[1] 2.498112

which can also be displayed on the same graph

barplot(table(X1)/1e6,col=rgb(1,0,0,.4))
barplot(table(X2)/1e6,col=rgb(0,0,1,.4),add=TRUE)

In fact, a more general result can be found in Theorem 4 of Ma (1998), taken up in Denuit & Frostig (2007): if p \preceq q (for the majorization order), and if X_i and Y_i follow B(p_i) and B(q_i) distributions respectively, then \sum_{i=1}^n X_i is dominated – in the sense of the convex order – by \sum_{i=1}^n Y_i. As Denuit & Frostig (2007) then explain,

we see that the heterogeneity decreases the dangerousness of the portfolio (as measured by the convex order between the corresponding total claim costs). This result is not as surprising as it might seem at first sight. (…) The safer case for such a portfolio is the one with p_1=\sum q_i, provided this sum is less than one, and p_2=\cdots=p_n=0, while the most dangerous case is the homogeneous portfolio with all the claim occurrence probabilities equal to \bar{p}. (…) An obvious example is furnished by a life insurance portfolio with equal benefits. In such a case, the dangerousness of the portfolio decreases as its degree of heterogeneity increases.

In other words, the Binomial distribution is the maximum, in the sense of the convex order, among all sums of Bernoulli variables (with the mean held fixed). As Michel Denuit explained to me, this is one of the only cases where heterogeneity reduces the risk. For example, with variables following Poisson distributions, P(\lambda_i),

Var[S_n]=\sum_{i=1}^n \lambda_i=n\bar{\lambda}=E[S_n]

in other words, the non-uniformity of the expected number of claims has no impact here on the dispersion of the sum.

rsim1=function(i) sum(rpois(n,p))
X1=unlist(lapply(1:1e6,rsim1))
rsim2=function(i) sum(rpois(n,barp))
X2=unlist(lapply(1:1e6,rsim2))
barplot(table(X1)/1e6,col=rgb(1,0,0,.4))
barplot(table(X2)/1e6,col=rgb(0,0,1,.4),add=TRUE)

(the two distributions coincide perfectly here).

Yet this result of Feller (and Hoeffding) seems counter-intuitive, because it seems to contradict the heuristic reading of the variance decomposition

Var[S_n]=\mathbb{E}[Var[S_n|P]]+Var[\mathbb{E}[S_n|P]]

that is,

Var[S_n]=\mathbb{E}[nP(1-P)]+Var[nP]\geq n\bar{p}(1-\bar{p})

which is minimal – for fixed \mathbb{E}[P] – when Var[P] is zero, that is, precisely when there is no heterogeneity. If we look more closely, what we called heterogeneity here corresponded to different, deterministic, values for the probabilities: S_n=X_1+\cdots+X_n with X_i\sim B(p_i). On the other hand, in the expression above, S_n\vert P=X_1+\cdots+X_n with X_i\sim B(P), where P is a random variable with values in [0,1]. Here, S_n follows a B(n,P) distribution, where P is a random variable.

For example, we can consider

rsim1=function(i) sum(runif(n)<(barp+rnorm(1)/10))
X1=unlist(lapply(1:1e6,rsim1))
rsim2=function(i) sum(runif(n)<barp)
X2=unlist(lapply(1:1e6,rsim2))
barplot(table(X1)/1e6,col=rgb(1,0,0,.4))
barplot(table(X2)/1e6,col=rgb(0,0,1,.4),add=TRUE)

(there is more variance with the mixture than in the “homogeneous” case), or

rsim1=function(i) sum(runif(n)<sample(p,size=1,replace=TRUE))
X1=unlist(lapply(1:1e6,rsim1))
rsim2=function(i) sum(runif(n)<sample(barp,size=1,replace=TRUE))
X2=unlist(lapply(1:1e6,rsim2))
barplot(table(X1)/1e6,col=rgb(1,0,0,.4))
barplot(table(X2)/1e6,col=rgb(0,0,1,.4),add=TRUE)

In Feller’s model, we know that all individuals have different, known, probabilities. Here, they all have the same probability, which is random (and unknown).

Ok, but what do we do now…? Interpreting this result is not easy. The portfolio without variability is the one in which all individuals have the same probability \bar{p}. Next to it, imagine a portfolio with heterogeneity in the probabilities, in other words two groups with two different probabilities, say p^- and p^+. But by assumption, \bar{p} must be the average value, in other words we necessarily have p^-<\bar{p}<p^+. Feller’s result tells us that the variance of the portfolio will be lower in the second case than in the first – provided we have been able to keep everyone in the portfolio. To keep the “low risks”, we had to offer them a lower premium. And to keep the “high risks”, competitors must not have offered prices that were too low. In other words, Feller’s approach only seems to work if prices are perfectly segmented, and if everyone else in the market does the same.

Optimal Portfolios #1

This afternoon, I will start a crash course on financial portfolio optimization, with applications in R. This week, we start with simple things: the theoretical setup, without and then with a risk-free asset. We will then discuss the problem of estimating the parameters in a robust way, before introducing the idea of using more general criteria than the variance to quantify risk (which means more general distributions… this point will be discussed further next time). The slides are available here, and the R code there (in a Markdown file).
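As a teaser for the first part, here is a minimal sketch (not taken from the slides) of the global minimum-variance portfolio, computed on hypothetical simulated returns, with weights proportional to \Sigma^{-1}\mathbf{1},

set.seed(1)
R=matrix(rnorm(250*4,mean=.0004,sd=.01),ncol=4)  # hypothetical daily returns on four assets
Sigma=cov(R)                                     # estimated covariance matrix
w=solve(Sigma,rep(1,4))                          # solve Sigma w = 1
w=w/sum(w)                                       # normalize so the weights sum to one
w

Adding a risk-free asset or an expected-return constraint changes the weights, but the computation remains simple linear algebra.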

On the symbolism of statistical quantities

This morning, I got a bit carried away on Twitter, something I usually prefer to avoid…

I have to say I had come across an article in Les Echos that annoyed me…

I then spent a lot of time trying to understand why that headline and that article had annoyed me so much… since it did not really teach anything new… I think it is really the form that irritated me, and the symbolism of "time" plays (I think) a large part in it. And yet, using the calendar year as a metaphor to explain statistics is nothing new.

I think the first time I saw this idea, I must have been about ten, when we were taught about the history of the universe, the Earth and humanity. Because, after all, handling very large numbers does not help to form a mental picture (which is also the case when talking about the State budget). Yes, the Big Bang took place a little less than 15 billion years ago, Lucy dates back about 3.2 million years, and Christ was born 2000 years ago. But juggling thousands, millions and billions is complicated. So in history classes, an image based on the calendar is used. The Big Bang took place on January 1st at midnight, and today is December 31st at midnight; in other words, each second corresponds to a little more than 400 years. The Earth forms in early September, fish and the first land plants arrive just before winter, around December 20th. Lucy stands up at 10:30 pm, the Egyptian Pyramids date from ten seconds to midnight, and Christ is born five seconds before midnight. That is the picture, roughly. In 90 minutes (out of a year) we have the history of humanity, and in 10 seconds, history as it is taught in school. But in this image, time is transformed into another time. A bit like the solar system metaphor, where our sun would have the diameter of a basketball and the Earth would be 27 metres away, the size of a tiny marble 2.3 millimetres in diameter: distances (and volumes) are represented by other, much smaller, distances.
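Just to check the orders of magnitude, here is a quick back-of-the-envelope computation, assuming 15 billion years are mapped onto a 365-day year,

age=15e9                  # age of the universe, in years
scale=age/(365*24*3600)   # years of history per second of the calendar, about 475
(3.2e6/scale)/3600        # Lucy stands up a couple of hours before midnight
4500/scale                # the Pyramids, about ten seconds before midnight
2000/scale                # Christ, a handful of seconds before midnight
365*(1-4.5e9/age)         # the Earth forms around day 255 of the year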

Here, the exercise is, I think, different: time is used to represent a monetary quantity. Admittedly, an old saying claims that time is money, but I think the symbolism is particularly strong here. While Les Décodeurs at Le Monde went back over the calculation of this quantity (the "tax freedom day") and questioned it, what makes me wonder is, personally, the symbolism attached to it.

The statistics hidden behind this July 29 "tax freedom day" are

  • public spending (the ratio of public spending to wealth created), which is around 57% in France (although the link with taxation is not really clear here)
  • the "effective tax rate of the average employee" used by some, which is also around 57% in France

And 57% of a year runs (roughly) from January 1st to the end of July, hence the date mentioned in the article. Symbolically, what do we read in this second statistic? That between January 1st and July 29 we work "for the State", to pay our taxes, and that afterwards we can (finally) work "for ourselves". At the beginning, we work for "others", as opposed to the end, when we work for "ourselves". With this calendar representation, we probably tell ourselves we are wasting our time. But looking more closely, am I really working for myself afterwards? Personally, I think I also spend a few days working for my bank, to pay the interest on the loan I repay every month for having bought my home some time ago. Because, after all, there are many charges, and not only the State. Having contributed to my pension when I was working in Canada, and then to health insurance, I saw clearly that at the time I was perhaps working less to pay these "tax charges", but I had to work several days a year (or even weeks) to pay for the health insurance covering my family. In short, the "for myself" bothers me. And what about the "for others"? The foundation of taxation is redistribution, isn't it? It is solidarity… I do not pay taxes for others, but "for us", don't I?
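The date itself is just a rule of three; a quick sanity check in R, assuming the 57% figure and taking 2016 (a leap year) as reference,

as.Date("2016-01-01")+round(.57*366)   # lands at the very end of July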

In short, this binary split of time strikes me as fallacious and dangerous, because instead of enlightening, it only serves an ideological purpose… and that is a pity.

The misuses of the precautionary principle

"When in doubt, abstain", says popular wisdom. The precautionary principle (Vorsorgeprinzip in German) was born from the idea that one should accept that there is doubt, or (scientific) uncertainty, in our knowledge of risks. A little over 20 years ago, the Barnier law introduced the precautionary principle into French law, for "risks of serious and irreversible damage to the environment", commonly referred to as "environmental risk". A little over 10 years ago, it was written into the Constitution, approved by 531 deputies, expressing a very broad political, and probably also social, consensus. Yet today, the precautionary principle is invoked in contexts as diverse as the risk of terrorist acts, but also civil or criminal proceedings. What are the consequences of this drift in the use of the precautionary principle?

Continue reading The misuses of the precautionary principle

History and analysis of the French-speaking economics blogosphere

This post is co-written with Thomas Renault, aka @captaineco_fr

Economics blogs now occupy a central place in economic debate and analysis, as Alex Tabarrok noted. And while the French-speaking economics blogosphere is still relatively modest, it is no less active, and could enjoy the same success as English-language blogs in the years to come.

The genesis of the economics blogosphere

At the beginning of 2005, Bernard Salanié, then a professor at Columbia University (New York), published the first post of his blog "L'économie sans tabou", in the continuity of a book of the same name published the previous year. Around the same time, Alexandre Delaigue and Stéphane Ménia, both former students of the Ecole Normale Supérieure de Cachan and teaching respectively at a military academy and in secondary education, launched the blog "éconoclaste", with a popularization ambition quite close to that of Stephen Dubner and Steven Levitt, the authors of the best-seller Freakonomics.

This movement in France echoed the one developing in the United States, where renowned economists such as Bradford DeLong ("Grasping Reality"), Tyler Cowen ("Marginal Revolution"), or Gary Becker and Richard Posner ("The Becker-Posner Blog") were starting to blog. In the words of Gary Becker, Nobel laureate in economics, the blogosphere then had the power to replace the market in optimizing the process of knowledge sharing among individuals.

Blogging is a major new social, political, and economic phenomenon. It is a fresh and striking exemplification of Friedrich Hayek's thesis that knowledge is widely distributed among people and that the challenge to society is to create mechanisms for pooling that knowledge. The powerful mechanism that was the focus of Hayek's work, as of economists generally, is the price system (the market). The newest mechanism is the "blogosphere".

Continue reading History and analysis of the French-speaking economics blogosphere

Data and health: values, actors and stakes

This post was written with Raphaël Suire.

Digital data concerns us all. As users of connected devices or online services, we leave traces as we use, consult, flag or comment on content or services. In fact, even when the user does nothing, the device or the platform providing the content can record this inactivity, which is indeed a piece of information. In itself, each trace taken in isolation says little about who we are. Moreover, the less routine we are in our use of devices or services, the harder it is to identify regularities in what we do. The routine character can itself be hard to observe: an arrival time at the office may look completely random, and yet the person regularly takes the first bus that passes in front of their home, leaving at exactly 8 o'clock. A perfectly determined and predictable behaviour, confronted with random obstacles, can look random. The fact remains that this digital data, for whoever knows how to store and cross-reference it, is black gold and a fuel for renewed performance for many organizations. Symmetrically, this exploitation is a source of tension between users who are sensitive to the protection of their personal data (the notion of "privacy concern") and these same actors. It also deeply challenges the regulator, because the real winners will be few: those who will ultimately own and capture the value of the aggregated data. And that is without counting a new playing field with stratospheric promises, that of connected health, a global market estimated at 308 billion dollars (Grand View Research, 2015).

Continue reading Data and health: values, actors and stakes

Where were the sharp petrol price increases observed?

On Friday, in my post "Petrol prices in times of shortage", I tried to visualize the explosion of petrol prices in some petrol stations over the last few weeks. But more generally, one may wonder where the sharp increases were observed.

From the database downloadable at http://www.prix-carburants.economie.gouv.fr/, it is possible to retrieve the prices, per day, in every petrol station in France (here l is the large list obtained by reading the annual XML file with xmlToList, as in the post below). To do so, we use the following function,

> spatial=function(dt){
+  base=NULL
+  for(no in 1:length(l)){  
+    prix=list()
+    date=list()
+    j=0
+    for(i in 1:length(l[[no]])){
+    v=names(l[[no]])
+    if(!is.null(v[i])){
+    if(v[i]=="prix"){
+    j=j+1
+    date[[j]]=as.character(l[[no]][[i]]["maj"])
+    }}
+  }
+ n=j
+ D=as.Date(substr(unlist(date),1,10),"%Y-%m-%d")
+ k=which(D==D[which.max(D[D<=dt])])
+ if(length(k)>0){
+ B=Vectorize(function(i) l[[no]][[k[i]]])
+   (1:length(k))
+ if("nom" %in%  rownames(B)){  
+   k=which(B["nom",]=="Gazole")
+   prix=as.numeric(B["valeur",k])/1000
+   if(length(prix)==0) prix=NA
+   base1=data.frame(indice=no,
+ lat=as.numeric(l[[no]]$.attrs["latitude"])
+    /100000,
+ lon=as.numeric(l[[no]]$.attrs["longitude"])
+    /100000,
+ gaz=prix,
+ cp=l[[no]]$.attrs["cp"])
+ base=rbind(base,base1)
+ }}}
+ return(base)}

For instance, we can retrieve the prices at the end of May

> B1=spatial(as.Date("2016-05-31"))

and at the beginning of May,

> B2=spatial(as.Date("2016-05-01"))

By merging these two datasets, we can compute the price change over the month,

> names(B1)=c("indice","lat","lon","fin","cp")
> names(B2)=c("indice","lat","lon","debut","cp")
> B=merge(B1,B2)
> B$var=(B$fin-B$debut)/B$debut*100

More specifically, let us focus on the stations where the price increased by more than 10%,

> idx=which((B$lon>(-10))&(B$lon<20)&
+             (B$lat>35)&(B$lat<55))
> B=B[idx,]
> B=B[!is.na(B$var),]
> B$Y=(B$var>10)*1

Note first that in more than 90% of the stations, the price went up between the beginning and the end of May

> mean(B$var > 0)
[1] 0.9288522

but increases of more than 10% remain rare (less than 5% of the petrol stations)

> mean(B$var > 10)
[1] 0.03305452

The distribution of the price changes (in %) is the following

> plot(density(B$var),xlim=c(-5,20))

If we now look at where these sharp increases were observed, we can see that they may occur almost anywhere

> library(maps)
> map("france")
> points(B$lon,B$lat,pch=19,col=c(rgb(0,0,1,.25),
+ rgb(1,0,0,.95))[1+(B$var>10)],cex=.5)

and it is even more striking when looking at increases of more than 5%

> points(B$lon,B$lat,pch=19,col=c(rgb(0,0,1,.35),
+ rgb(1,0,0,.95))[1+(B$var>5)],cex=.5)

And if we look at the places where the proportion of stations that sharply increased their prices is the highest, we get the following map,
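The map itself is not reproduced here, but a possible way to compute these proportions (a sketch, not necessarily the code behind the figure) is to aggregate by département, using the first two digits of the postal code and the indicator Y defined above,

> B$dep=substr(as.character(B$cp),1,2)
> prop=aggregate(Y~dep,data=B,FUN=mean)
> head(prop[order(-prop$Y),])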

In other words, price increases could be observed pretty much everywhere in France. We can then compare with the map of shortages, put online at https://mon-essence.fr/

While some regions with sharp price increases do correspond to petrol shortages, this is not the case everywhere. In short, understanding what happened to prices this May is far from simple… to be continued…

Petrol prices in times of shortage

This afternoon, on the 60 millions de consommateurs website, Lionel Maugain looked back at the petrol stations that showed very large price variations during the days of shortage.

Using the database of fuel prices at the pump, downloadable from http://www.prix-carburants.economie.gouv.fr/, it is possible to visualize these large price variations. Reading the database could not be simpler

> rm(list=ls())
> annee=year=2016
> fichier="PrixCarburants_annuel_2016.xml"
> library(plyr)
> library(XML)
> library(lubridate)
> l=xmlToList(fichier)

Then we turn this big list into a time series, using the following function

> time_series=function(no,type_gas="Gazole"){
+    prix=list()
+    date=list()
+    nom=list()
+    j=0
+    for(i in 1:length(l[[no]])){
+      v=names(l[[no]])
+      if(!is.null(v[i])){
+      if(v[i]=="prix"){
+      j=j+1
+      date[[j]]=as.character(l[[no]][[i]]["maj"])
+      prix[[j]]=as.character(l[[no]][[i]]["valeur"])
+      nom[[j]]=as.character(l[[no]][[i]]["nom"])
+     }}
+    }
+    id=which(unlist(nom)==type_gas)
+    n=length(id)
+    jour=function(j) as.Date(substr(date[[id[j]]],1,10),"%Y-%m-%d")
+    jour_heure=function(j) as.POSIXct(substr(date[[id[j]]],1,19), format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
+    ext_y=function(j) substr(date[[id[j]]],1,4)
+    ext_m=function(j) substr(date[[id[j]]],6,7)
+    ext_d=function(j) substr(date[[id[j]]],9,10)
+    ext_h=function(j) substr(date[[id[j]]],12,13)
+    ext_mn=function(j) substr(date[[id[j]]],15,16)
+    prix_essence=function(i) as.numeric(prix[[id[i]]])/1000
+    base1=data.frame(indice=no,
+   id=l[[no]]$.attrs["id"],
+   adresse=l[[no]]$adresse,
+   ville=l[[no]]$ville,
+   lat=as.numeric(l[[no]]$.attrs["latitude"])/100000,
+   lon=as.numeric(l[[no]]$.attrs["longitude"])/100000,
+   cp=l[[no]]$.attrs["cp"],
+   saufjour=l[[no]]$ouverture["saufjour"], 
+   Y=unlist(lapply(1:n,ext_y)),
+   M=unlist(lapply(1:n,ext_m)),
+   D=unlist(lapply(1:n,ext_d)),
+   H=unlist(lapply(1:n,ext_h)),
+   MN=unlist(lapply(1:n,ext_mn)),
+   prix=unlist(lapply(1:n,prix_essence)))
+   base1=base1[!is.na(base1$prix),]
+   date_d=paste(year,"-01-01 12:00:00",sep="")
+   date_f=paste(year,"-06-01 12:00:00",sep="")
+   vecteur_date=seq(as.POSIXct(date_d, format =
+   "%Y-%m-%d %H:%M:%S"),
+   as.POSIXct(date_f, format = 
+   "%Y-%m-%d %H:%M:%S"),by="days")
+   date=paste(base1$Y,"-",base1$M,"-",base1$D,
+   " ",base1$H,":",base1$MN,":00",sep="")
+   date_base=as.POSIXct(date, format = 
+   "%Y-%m-%d %H:%M:%S", tz = "UTC")
+   idx=function(t) sum(vecteur_date[t]>=date_base)
+      vect_idx=Vectorize(idx)(1:length(vecteur_date))
+      P=c(NA,base1$prix)
+      vp=P[1+vect_idx]
+      vp=vp*100/(vp[!is.na(vp)][1])
+      base2=ts(vp,start=year,frequency=365)
+      list(base=base1,ts=base2)
+    }

In the article, the largest variation was observed in Sevran. To retrieve the list of petrol stations, we use

> ville=list()
> for(no in 1:length(l))
+     ville[[no]]=l[[no]]$ville
> ville=unlist(ville)
> idx=which(ville=="Sevran")

For the first station, the price evolution is relatively moderate

> plot(time_series(idx[1])$ts,col="red",type="l")
> abline(h=100,lty=2)

Even if the price does seem to have risen by 5% over the last few weeks, that is really nothing compared to the other station,

> plot(time_series(idx[2])$ts,col="red",type="l")
> abline(h=100,lty=2)

with a price that rose by 50% in two months, and by almost 30% in just a few days.

In Villeneuve-d'Ascq, we find a similar, though less pronounced, pattern

> idx=which(ville=="Villeneuve-d'Ascq")
> plot(time_series(idx[3])$ts,col="red",type="l")
> abline(h=100,lty=2)

Picking an asset to invest

Yesterday, Andrew Lo spent some time on a nice graph, discussing attitudes towards risk. Here are four assets (thanks for improving the terminology): real data (no information about time here, but the scale is the same for all four of them)

The question raised was quite simple

if you could invest in one, and only one, asset, which one would you pick?

Continue reading Picking an asset to invest