All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE ParisTech, associate professor at École Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, with a Master in Mathematical Economics (Paris Dauphine) and a PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

AI to predict riots?

A few weeks ago, I was contacted by a journalist who wanted to ask me a few questions about our article Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media. It was an opportunity to dive back into it… and to see what has been written since… And tonight, somewhat by chance, I discovered that the article has been published, in the February issue of Science & Vie…

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). The functions a, b and c are specified according to the type of exponential distribution (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first-order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0} with Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression the natural (so-called canonical) link is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
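To make the iterative scheme concrete, here is a minimal sketch of that iteratively reweighted least squares recursion for a Poisson regression with the canonical log link, on simulated data (the data, the starting value and the number of iterations are arbitrary choices; in practice glm(y ~ x, family = poisson) does the same job):

set.seed(1)
n = 200
x = runif(n)
y = rpois(n, lambda = exp(1 + 2 * x))
X = cbind(1, x)
beta = rep(0, ncol(X))                     # starting value
for(k in 1:25){
  eta = X %*% beta                         # linear predictor
  mu  = exp(eta)                           # g^{-1}(eta), with the log link
  W   = as.vector(mu)                      # working weights, since Var[Y]=mu and g'(mu)=1/mu
  z   = eta + (y - mu) / mu                # working response (first-order expansion of g)
  beta = solve(t(X) %*% (W * X), t(X) %*% (W * z))
}
cbind(beta, coef(glm(y ~ x, family = poisson)))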

From a numerical point of view, the computer solves that first-order condition, and the distribution of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the distribution of Y is here only an interpretation, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.
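As a quick, hedged illustration on made-up data, R’s glm() will fit a “Poisson regression” to positive non-integer responses: it issues warnings about non-integer values, but the estimator is perfectly well defined:

set.seed(1)
x = runif(100)
y = exp(1 + x) * rgamma(100, shape = 2, rate = 2)    # positive, non-integer responses
coef(glm(y ~ x, family = poisson))                   # fits, with warnings about non-integer y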

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of the logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\boldsymbol{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the iso-probability curves are here parallel hyperplanes of the form \beta_0+\mathbf{x}^T\beta=\text{constant}. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}_i^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent model of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
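On simulated data, both models are fitted in R with glm(); this is only a hedged sketch (the data-generating process below is made up), but it shows that the logit and probit coefficients differ roughly by a scale factor:

set.seed(1)
x = rnorm(500)
y = rbinom(500, size = 1, prob = 1 / (1 + exp(-(0.5 + x))))
coef(glm(y ~ x, family = binomial(link = "logit")))    # logistic regression
coef(glm(y ~ x, family = binomial(link = "probit")))   # probit regression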

Regression in high dimension

As we mentioned earlier, the first-order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s drawn from \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to use non-uniform sub-sampling, with a probability related to the influence of observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be calculated once the model has been estimated).
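A minimal sketch of uniform sub-sampling on simulated data (the sample sizes are arbitrary, and this is plain least squares on a sub-sample rather than the estimator of Dhillon et al. (2014)); hatvalues() returns the leverages L_i:

set.seed(1)
n = 1e5
X = matrix(rnorm(n * 5), n, 5)
y = X %*% rep(1, 5) + rnorm(n)
idx = sample(n, 2000)                      # uniform sub-sample, n_s = 2000
fit_s = lm(y[idx] ~ X[idx, ])
cbind(coef(fit_s), coef(lm(y ~ X)))        # sub-sampled versus full estimates
head(hatvalues(fit_s))                     # leverages L_i on the sub-sample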

In general, we will talk about massive data when the data table, of size n\times p, does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many machine-learning libraries use iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). The latter avoids, at each iteration, the computation of the gradient over every observation of the training sample. Rather than performing an averaged descent step at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is computed (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta (\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Shuffle the data
  • Iteration step: for t=1,\cdots, n, draw i\in\{1,\cdots,n\} without replacement, and set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(X_i)) } }{ \partial{ \beta}}

This algorithm can be repeated several times as a whole, depending on the user’s needs. The advantage of this method is that at each iteration it is not necessary to calculate the gradient on all observations (there is no longer a sum over the whole sample). It is therefore suitable for large databases. This algorithm relies on convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
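A minimal sketch of this stochastic gradient descent for the quadratic loss and a linear model m_\beta(\mathbf{x})=\mathbf{x}^T\beta, on simulated data (the constant learning rate \gamma_t=0.01 and the five passes over the data are ad hoc choices):

set.seed(1)
n = 10000
x = rnorm(n)
y = 1 + 2 * x + rnorm(n)
X = cbind(1, x)
beta = c(0, 0)
for(pass in 1:5){                          # several passes over the whole (shuffled) sample
  for(i in sample(n)){                     # observations drawn without replacement
    grad = -2 * (y[i] - sum(X[i, ] * beta)) * X[i, ]   # gradient of (y_i - x_i'beta)^2
    beta = beta - 0.01 * grad              # gamma_t kept constant here, for simplicity
  }
}
rbind(beta, coef(lm(y ~ x)))               # SGD versus least squares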

(references will be given in the very last post of this series). To be continued…

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let \|\cdot\| denote the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Let \mathcal{E}_X denote the space spanned by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered here. Note that no distributional assumption is made in this section: the geometric properties are derived from the properties of expectation and variance in the space of variables with finite variance.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. The prediction is then \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. As \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that residuals are an innovation term, unpredictable in the sense that \Pi_{X }\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2, which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}} The coefficient of determination, R^2, is then interpreted as the square of the cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta). An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes y=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta} _2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables in the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, on each of the sets of explanatory variables. This is a theorem of double projection, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between conditional expectation and orthogonal projection in the space of variables with finite variance).
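A small numerical check of this double-projection (Frisch-Waugh) result, on simulated data: regressing the residualized \mathbf{y} on the residualized \mathbf{X}_2 recovers exactly the \beta_2 coefficients of the full regression.

set.seed(1)
n = 200
X1 = matrix(rnorm(n * 2), n, 2)
X2 = matrix(rnorm(n * 2), n, 2)
y = 1 + X1 %*% c(1, -1) + X2 %*% c(2, 3) + rnorm(n)
y_star  = residuals(lm(y ~ X1))            # projection of y on the orthogonal of E_{X1}
X2_star = residuals(lm(X2 ~ X1))           # same projection applied to the columns of X2
cbind(coef(lm(y ~ X1 + X2))[4:5], coef(lm(y_star ~ X2_star - 1)))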

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf {\beta}_1 + \underbrace{ (\mathbf {X}_1^T\mathbf {X}_1)^{-1} \mathbf {X}_1^T \mathbf {X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}, so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias \beta_{12} being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b} _1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).
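A small simulated illustration of both situations (the data-generating process below is made up): omitting a regressor correlated with \mathbf{x}_1 biases the estimated slope, while adding an irrelevant regressor leaves it unbiased but less precise.

set.seed(1)
n = 1000
x1 = rnorm(n)
x2 = 0.7 * x1 + rnorm(n)                   # correlated with x1
y = 1 + 2 * x1 + x2 + rnorm(n)
coef(lm(y ~ x1))                           # under-identification: slope biased away from 2
coef(lm(y ~ x1 + x2))                      # correctly specified model
z = rnorm(n)                               # irrelevant regressor
coef(lm(y ~ x1 + x2 + z))                  # over-identification: unbiased, less efficient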

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns of the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is more difficult to obtain, explicitly.
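As a quick check on simulated data, the trace of the smoothing (hat) matrix of a linear regression does equal the number of columns of \mathbf{X} (here an intercept plus three covariates):

set.seed(1)
n = 100
X = cbind(1, matrix(rnorm(n * 3), n, 3))
S = X %*% solve(t(X) %*% X) %*% t(X)       # projection matrix Pi_X, i.e. the smoother S
sum(diag(S))                               # trace(S) = 4, the "dimension" of the model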

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form, since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)}, where K(\cdot) is a kernel function, which assigns a weight that is larger the closer x_i is to x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic expansions, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}} for some constants that can be estimated (see Simonoff (1996) for a discussion). These two functions evolve inversely with h, as shown in Figure 1 (where the meta-parameter on the x-axis is here, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.
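A minimal sketch of this Nadaraya-Watson estimator with a Gaussian kernel, on simulated data (the bandwidth h=0.5 is an arbitrary choice here; in practice one would rather use ksmooth() or a dedicated package):

set.seed(1)
n = 200
x = runif(n, 0, 10)
y = sin(x) + rnorm(n, sd = .3)
nw = function(x0, h){
  w = dnorm((x0 - x) / h)                  # kernel weights K_h(x0 - x_i)
  sum(w * y) / sum(w)                      # weighted average, i.e. s_x^T y
}
u = seq(0, 10, by = .1)
plot(x, y)
lines(u, sapply(u, nw, h = .5), col = "red")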

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).

The natural idea is then to try to minimize the mean squared error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then integrate over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})} while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x), and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}} The main problem here, in practice, is that many of the terms in the expression above are unknown. Machine learning offers computational techniques, where the econometrician is used to searching for asymptotic (mathematical) properties.
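Since the terms of the asymptotic MISE are unknown in practice, a data-driven alternative is to pick h by leave-one-out cross-validation. Here is a hedged sketch on simulated data (the grid of candidate bandwidths is arbitrary):

set.seed(1)
n = 200
x = runif(n, 0, 10)
y = sin(x) + rnorm(n, sd = .3)
cv = function(h) mean(sapply(1:n, function(i){
  w = dnorm((x[i] - x[-i]) / h)            # kernel weights, leaving observation i out
  (y[i] - sum(w * y[-i]) / sum(w))^2       # squared prediction error at x_i
}))
h = seq(.1, 2, by = .1)
h[which.min(sapply(h, cv))]                # data-driven bandwidth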

To be continued (references mentioned above are online here)…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in the book on the foundations of econometrics, and more particularly in the first chapter, “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) showed (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)), econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probability space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. By abuse of notation, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well, in particular he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can still be found in statistical textbooks today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which stipulates that under fairly general conditions the sum of a large number of random variables is well approximated by the normal law.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming that the variables (Y_i, \mathbf{X}_i) are independent (it is possible to imagine temporal observations – we would then have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~(1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written: \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2. Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. Then we will set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace. The maximum likelihood estimator is obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.
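A hedged numerical check on simulated data: maximizing the Gaussian log-likelihood numerically (with optim()) returns the same (\beta_0,\beta) as ordinary least squares via lm():

set.seed(1)
n = 200
x = rnorm(n)
y = 1 + 2 * x + rnorm(n, sd = .5)
negloglik = function(par){                 # par = (beta_0, beta, log sigma)
  -sum(dnorm(y, mean = par[1] + par[2] * x, sd = exp(par[3]), log = TRUE))
}
opt = optim(c(0, 0, 0), negloglik)
rbind(opt$par[1:2], coef(lm(y ~ x)))       # maximum likelihood versus least squares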

The first-order conditions lead to the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator: \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as often in econometrics), y=\mathbf{x}^T\mathbf{\beta}+\varepsilon. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Hence, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with smallest variance[3] among unbiased linear estimators. Furthermore, even if we cannot get normality in finite samples, asymptotically this estimator is Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
The condition of having a full-rank \mathbf{X} matrix can be (numerically) strong in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, for any \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
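A minimal sketch of this ridge estimator on simulated, nearly collinear covariates (the value of \lambda is arbitrary here; packages such as glmnet or MASS::lm.ridge would be used in practice):

set.seed(1)
n = 100
x1 = rnorm(n)
x2 = x1 + rnorm(n, sd = .01)               # almost collinear with x1
X = cbind(1, x1, x2)
y = 1 + x1 + x2 + rnorm(n)
lambda = 1
solve(t(X) %*% X + lambda * diag(ncol(X)), t(X) %*% y)   # ridge estimator of level lambda
coef(lm(y ~ x1 + x2))                      # unstable least squares estimates, for comparison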

Residuals

It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Thus, equation (1) is often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables from some \mathcal{N}(0,\sigma^2) distribution. With a vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}]. Those (estimated) residuals are basic tools for diagnosing the relevance of the model.

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in count models or logistic regression.

However, writing the model using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear to begin with) between the quantities of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term) where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed, but this interpretation often makes the link between an economic relationship and a complicated economic model difficult, economic theory speaking abstractly about a relationship between two magnitudes, while the econometric model imposes a specific form (which magnitude is y and which magnitude is x), as shown in more detail in Morgan (1990), Chapter 7.

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.

Gini Regressions and Heteroskedasticity

Our joint paper “Gini Regressions and Heteroskedasticity” with Ndéné Ka, Stéphane Mussard and Oumar Hamady Ndiaye just appeared in Econometrics.

We propose an Aitken estimator for the Gini regression. The suggested A-Gini estimator is proven to be a U-statistic. Monte Carlo simulations are provided to deal with heteroskedasticity and to make some comparisons between generalized least squares and the Gini regression. A Gini-White test is proposed and shows that a better power is obtained compared with the usual White test when outlying observations contaminate the data.

Histogram and density on a logarithmic scale

Almost 20 years ago, Paul-André Rosental, Gilles Postel-Vinay, Akiko Suwa-Eisenmann and Jérôme Bourdieu published Migrations et transmissions inter-générationnelles dans la France du XIXe et du début du XXe siècle, which presented the graph below,

We came across this graph with Ewen Gallic while writing our article Using Collaborative Genealogy Data to Study Migration (which will appear very soon in the journal The History of the Family). Classically, in an academic article, one needs to position oneself with respect to the existing literature. With our own data, we had built the following histogram

just to show that we recovered this bimodal shape. From the beginning, I have not been very comfortable with this graph, but we kept it, not for its semiological virtues, but because it validates our approach, by obtaining results comparable to others already established in the literature.

I was thinking about all this again the day before yesterday, when mentioning on Twitter the following graph

which contains something similar, namely a kind of histogram in blue, except that the width of the bars decreases here in a logarithmic way…

I am very uncomfortable with these graphs, because I do not know how to interpret them. But perhaps we should go back to basics, to understand what we are doing.

We have a sample \{x_1,\cdots,x_n\}, say of amounts of tax paid. We have a sample, as we say in descriptive statistics. In mathematical statistics, we assume that the x_i are realizations of random variables. We could thus say that x_i=X(\omega_i) where X is a random variable defined on a probability space (\Omega,\mathcal{A},\mathbb{P}). As a reminder, \Omega is the universe, a kind of abstract fundamental space. On the website les-mathématiques, we are told that \Omega is the space of observables, which I rather dislike, because for me \Omega is precisely an abstract space. In decision theory, following for example Probability and Uncertainty in Economic Modeling, we speak of states of nature (this is also the terminology found in States of Nature and the Nature of States). And quite often we forget this space, thanks to the transfer theorem: indeed, if we have a real-valued random variable X:\Omega \rightarrow \mathbb {R}, then {\displaystyle \mathbb {E} \left[\varphi (X)\right]=\int _{\Omega }\varphi {\big (}X(\omega ){\big )}\mathbb {P} (\mathrm {d} \omega )=\int _{\mathbb {R} }\varphi (x)\mathbb {P} _{X}(\mathrm {d} x)}, which means, politely, that we never actually work on this fundamental space, but only look at its “transfer” to the real line (here I will stick to the one-dimensional case), that is, the “value” taken by the random variable (I will come back later to these integral expressions). It is an abuse of language to say that, for the roll of a die, the universe is \Omega = \{1, 2, 3, 4, 5, 6\}, corresponding to the values of the faces of the die. In fact, the fundamental space of “states of nature” may be more complicated than that, but to make things easier to grasp, we “transfer” it onto the countable space \{1, 2, 3, 4, 5, 6\}. But I am starting to drift away from the subject, especially since I had promised myself not to do (too much) measure theory. The other reason is that I think this is not exactly the formalization used in statistics. I would rather say that x_i=X_i(\omega), where the variables X_1,\cdots,X_n are independent random variables with the same distribution. Why? With the previous notations, we have the sample \{x_1,\cdots,x_n\}, which can be seen as a realization of the random variables \{X_1(\omega),\cdots,X_n(\omega)\}. A statistic is then a function built on our sample, \hat{\theta}=h(x_1,\cdots,x_n), for example the (empirical) mean \overline{x}=\frac{1}{n}\sum_{i=1}^n x_i, which is then a number. But it can also be a random variable, \hat{\theta}=h(X_1,\cdots,X_n), that is, here, for the mean, \overline{X}=\frac{1}{n}\sum_{i=1}^n X_i.

The main difficulty is that, in mathematical statistics, we use the same notation, \widehat{\theta}, for two objects of very different natures. Hence some abuses of language: one can say in class “here the mean is 37.5%” and then ask to “compute the variance of the mean”. But again, I digress. Let us come back to our random variable X. Often, we will try to describe its probability distribution.

The simplest approach is to use the cumulative distribution function, defined without ambiguity by F(x) = \mathbb{P}[\{\omega\in\Omega:X(\omega)\leq x\}] (which makes sense since the probability measure \mathbb{P} is indeed defined on the space of states of nature \Omega). But by abuse of language, we will simply write F(x) = \mathbb{P}[X\leq x]. Suppose now that our variable X is absolutely continuous. In that case – this corresponds, I think, to the first fundamental theorem of calculus – F(x)=\int _{-\infty}^{x}f(t) dt, where f is then the density of the random variable. The slightly subtle point here is that the integral is defined with respect to the Lebesgue measure (which makes it possible to define properly what this dt at the end of the integral is). But let us forget this point for a moment.

We finally arrive at the first point I wanted to make: for (absolutely) continuous variables, we can “represent” their distribution by their density. This is the picture above, if X is a variable that follows a lognormal distribution

u=seq(0,1000,length=251)
v=dlnorm(u,5,1)
plot(u,v,type="l",xlab="",ylab="")

The interpretation is simple: since the probability \mathbb {P} (a<X\leq b) is then computed by the following relation: \mathbb {P} \left(a<X\leq b\right)=\int _{a}^{b}f(t)dt, the probability \mathbb {P} (a<X\leq b) is read as the area under the curve on the interval [a, b].

Among the usual estimators of the density, one can use a histogram. Even though it is an object everyone has manipulated since high school, the formal construction of this object is a bit technical. Indeed, the first point is that the set of values taken by the observations x_i – say (a,b] – must be cut into a partition, that is, a set of disjoint intervals covering (a,b], I_1,\cdots,I_k. The simplest is to have a regular partition, I_1=(a,a+(b-a)/k], I_2=(a+(b-a)/k,a+2(b-a)/k], etc. In general, I_j=(a+(b-a)(j-1)/k,a+(b-a)j/k]. And classically, a histogram is built by counting the number of observations in each of the intervals,

x=rlnorm(500,5,1)
hist(x,xlim=c(0,1000),breaks=seq(0,10000,by=100))

This is the classical histogram, with a bin width of 100.

hist(x,xlim=c(0,1000),breaks=seq(0,10000,by=100),probability=TRUE,ylim=c(0,.005))
lines(u,v,col="red")

To get a density (i.e. a “step function that integrates to 1”), the height must be normalized (dividing by the bin width). We can also superimpose the density of the lognormal distribution.

In the graphs I was presenting, a logarithmic scale is used on the x-axis. This can be done in R with

plot(u,v,log="x",type="l")

which corresponds to the previous graphs. We can also, by hand, say that we no longer want to see \{x,f(x)\} but \{\log(x),f(x)\}.

plot(log(u),v,type="l")

The previous area then becomes

Visually, we reason by comparison. When we say that the density is an area under the curve, in reality it is the proportion of the area under the curve (relative to the total area) that speaks to us. But we might as well do it with two areas, to understand better. On the right, in red, we have the probability of having a value between 200 and 600, here about 30%.

plnorm(600,5,1)-plnorm(200,5,1)
[1] 0.3015131

and in blue, we have the probability of being between 92 and 200, which is also about 30%

plnorm(200,5,1)-plnorm(92,5,1)
[1] 0.3010197

Here the two areas are equal. Note that this is not necessarily obvious at first glance, and the (right-hand) asymmetry suggests that the red area is perhaps larger. But let’s move on.

Let us now consider our logarithmic scale, on the x-axis,

or the version plotted against the logarithm of the x-values,

As noted before, we like this graph because we recognize the shape of a “normal” density (which could make sense, since a lognormal distribution is precisely obtained by taking the exponential of a normal distribution…). That said, we clearly have a scale problem here, because the total area under the curve is less than 1%… in other words, we are far from 1. But we can still recover a normal distribution,

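# note: u2 (not defined in this excerpt) denotes the log-scale grid, presumably log(u)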
lines(u2,dnorm(u2,4,1)/90,col="blue")

(note, by the way, that not only did we have to divide by 90, but the mean is here 4, and not 5). Since we have the density of a normal distribution (up to a transformation), we can easily do computations, in particular compute the two areas, red and blue

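# note: I and I2 (not defined in this excerpt) index the grid values of u falling in the
# red interval [200, 600] and the blue interval [92, 200] used above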
range(log(c(u[I],rev(u[I]))))
[1] 5.300315 6.396263
pnorm(range(log(c(u[I],rev(u[I])))),4,1)
[1] 0.9032535 0.9917184
diff(pnorm(range(log(c(u[I],rev(u[I])))),4,1))
[1] 0.08846485
diff(pnorm(range(log(c(u[I2],rev(u[I2])))),4,1))
[1] 0.2019666

So the blue area is here more than twice the red area. In other words, if we interpret the curve as a density on the graph below

we intuitively want to say that a value between 92 and 200 is twice as likely as a value between 200 and 600. Which is false.

If we try to understand a little better, I will come back to two points… The first is that, as strange as it may seem, it is almost a stroke of luck that by taking a logarithmic scale we obtain a lognormal density. We had written an article with Emmanuel Flachaire (Log-Transform Kernel Density Estimation of Income Distribution) which came back to the importance of this logarithmic transformation. But here it is a bit different. We indeed have two graphs: the first is \{x,f(x)\} while the second is \{\log(x),f(x)\}, i.e., making the change of variable y=\log(x) (or x=e^y), it is \{y,f(e^y)\}. Now, in the first case, we had a lognormal distribution, in other words f(x )=\frac {1}{x\sigma {\sqrt {2\pi }}}\exp \left(-\frac {(\log x-\mu )^{2}}{2\sigma ^{2}}\right)={\frac {1}{x}}\phi(\log(x);\mu ,\sigma^2 ), where \phi denotes the Gaussian density. If we look at the second graph, \{y,f(e^y)\}, we are then representing \{y,e^{-y}\phi(y;\mu ,\sigma^2 )\}. The magic operates because we can bring the e^{-y} into the density of the normal distribution, which gives two things: (1) a translation of the mean, (2) a multiplicative factor on the density. These are the two phenomena we observed here. If we detail a little, e^{-y}\exp \left(-\frac {(y-\mu )^{2}}{2\sigma ^{2}}\right)=\exp \left(-\frac {(y-[\mu-\sigma^2] )^{2}}{2\sigma ^{2}}+\star\right), where we find the translation of the mean by 1 (centered on 4 and no longer on 5, which is logical since we had taken \sigma^2 equal to 1). I leave it to the more courageous to compute the \star which gives the multiplicative factor. The second point is a little more technical; it is related to a measure problem. In the first case, when we computed an integral, we had a dx, and in the second case, we compute areas with a dy, but this is not the right transformation. Indeed, if y=\log(x) (or x=e^y) then dx=e^ydy. More formally, in the first case, when we computed \mathbb{P}[X\in[a,b]] we computed the integral \int _{a}^{b}f(x) dx. In the second case, we compute \int _{\alpha}^{\beta}f(e^y) dy since we visualize the curve \{y,f(e^y)\}. But if we make a proper change of variable, \int _{\alpha}^{\beta}f(e^y) dy=\int _{a}^{b}f(x) \frac{dx}{x}, which is no longer at all the integral we are trying to compute. I think this story of dy corresponding to x^{-1}dx could have an interpretation in terms of a change of measure (the reference measure is no longer the Lebesgue measure, which gives us a relative uniformity on the x-axis, and allows a link with the “classical” integral – in the Riemann sense).
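A quick numerical check of this change-of-variable issue, with integrate(): on the log scale the proper density is f(e^y)\,e^y (the density of \log X, here a \mathcal{N}(5,1)), while the curve actually drawn, f(e^y), integrates to far less than 1:

f = function(x) dlnorm(x, 5, 1)
integrate(function(y) f(exp(y)), lower = -10, upper = 20)$value           # about 0.011, far from 1
integrate(function(y) f(exp(y)) * exp(y), lower = -10, upper = 20)$value  # equals 1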

In short, this business of putting a logarithmic scale on the x-axis to visualize a density is very disturbing. The intuition we may have of it is biased, and the mathematical object we create is complex…

Networks to reinvent insurance?

The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who tried to find a walk – starting from a given point – that would bring us back to that point by passing once and only once through each of the seven bridges of the city of Königsberg. These networks can be compared to metro networks, consisting of stations (nodes) linked to one another by rails, or not, or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to create relatively homogeneous communities, willing to share a risk, recreating a form of mutualization.

Network and credit

In genealogy, we will have hierarchical networks, a child being linked to his parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyze the links between individuals (or organizations) within a group. Friendships can be studied in a schoolyard (a link that could be an invitation to a birthday party) or e-mail exchanges in a company (the Enron e-mail database has been widely used, with over 180,000 messages exchanged between 36,000 employees[i]). Figure 1 shows two networks of 20 individuals (A, B, …, T).

 

Figure 1: Random networks, 20 nodes (Watts-Strogatz and Barabási)

In a Facebook or Linkedin type vision, we will say that E and F are linked, in the sense of “friends”, if there is a segment linking points E and F. A network can be directed, for example if we study the exchange of messages (E wrote to F), or money loans (E lent money to F). If historically only adjacency was studied (existence or not of links), we can now add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes are then the banks). The study of networks within village communities in developing countries has led to a better understanding of informal finance mechanisms. Banerjee et al (2013) study the dissemination of information in a network, and more particularly microfinance loans.

While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit organizations to use a borrower’s social network to determine whether or not they represent a good credit risk. In particular, if the friends’ credit scores were too low, a person could be denied credit. This situation is dangerous because of particular properties of networks, and more specifically the paradox of friends.

From the very small world to the paradox of friends

In 1929, Frigyes Karinthy hypothesized that any person on earth could be connected to any other person by a chain of individual relationships involving at most 6 links. “We should select any person from the 1.5 billion inhabitants of the planet, anyone, anywhere. It appears that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the selected individual using nothing other than the network of personal acquaintances.” This theory of six handshakes originated in a literary short story. It took the work of Michael Gurevich in the 1960s, then Stanley Milgram ten years later, for the first attempts to quantify these relationships to appear, under the name “Small World Problem”.

While Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, more recently Bhagat et al. (2016) estimated that any two people on Facebook were connected by an average of three and a half people. On the random network on the left, a person has, on average, 2 friends, while a random friend has, on average, 2.25 friends. On the right-hand network, the gap is even greater: there too a person has, on average, 2 friends, but a random friend will have, on average, more than 4 friends.

 

Figure 2: Random networks, 500 nodes (Watts-Strogatz and Barabási)

This paradox, observed in 1991 by sociologist Scott Feld, is very easily demonstrated. Heuristically, we can see a link with the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the term on the left is the number of friends of my friends, divided by my number of friends. The difference is all the greater as the dispersion of the number of friends is large. If the left-hand network is very dense, the right-hand network has a power-law property: the distribution of the number of friends follows a power law (or Zipf, or Pareto, distribution). Figure 3 shows the distribution of the number of friends on a network, on a double logarithmic scale: linearity indicates a power-law distribution. This type of distribution can be found in a very large number of networks, particularly Facebook, as shown by Wohlgemuth & Matache (2014).
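A hedged sketch of this paradox using the igraph package (assumed to be installed) on a simulated Barabási-Albert network; the size-biased ratio \mathbb{E}[X^2]/\mathbb{E}[X] is the average number of friends of a randomly chosen friend:

library(igraph)
set.seed(1)
g = sample_pa(500, m = 2, directed = FALSE)   # scale-free (Barabasi-Albert) network
d = degree(g)
mean(d)                                       # average number of friends
mean(d^2) / mean(d)                           # average number of friends of a random friend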

 

Figure 3: Distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red)

The classic interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (we then speak of a “peer effect“) but it also has impacts in risk management or public health. Christakis & Fowler (2010) have shown that influenza epidemics can be detected almost two weeks in advance, by monitoring the infection in a social network. In particular, the analysis of the health of central people in a network is “an ideal way to predict outbreaks, but detailed information doesn’t exist for most groups, and to produce it would be time-consuming and costly”. To return to the example of the credit score, if it turns out to be correlated with the number of friends, the paradox of friends makes it dangerous to use the friends’ scores as an indicator of an individual’s risk!

The importance of homophily

Another important feature of networks is the notion of homophily, introduced in sociology in 2001 by two important articles, corresponding to the tendency to be connected to one’s peers. McPherson et al (2001) assumed that similarity generates connection, and therefore people’s personal networks are homogeneous across many socio-demographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, with a focus on interracial friendships. Easley & Kleinberg (2010) present a number of consequences of homophily, ranging from the creation of tables at business meals to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (by gender, age, socio-professional category, etc.), how the links are distributed: between groups, or within groups.

 

Figure 4: Low homophily (left) and high homophily (right)

In an insurance context, an actuary seeks to create tariff classes, groups that are homogeneous in terms of risks, according to explanatory variables (the so-called tariff variables). People who live in the same place, drive the same types of vehicles, and have the same characteristics are likely to be in the same class. But if homophily exists in a population, a tariff group could perhaps be observed on a network of friends. Why not, then, consider creating groups within a network?

Using insurance networks

In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 insured in 2018. In France, a short collaborative insurance experiment was launched in 2015 with Inspeer, offering to share damage insurance deductibles (in car or home insurance) with friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, are based on the formation of small groups by a broker. A portion of the insurance premiums paid goes into a group fund, the other portion to a third-party insurance company. Minor damage suffered by the policyholder is first covered by this group fund. For claims exceeding the deductible, the usual insurer is used. A group can be formed by the insured, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (e.g. liability insurance with legal expenses insurance).

As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. There is no temptation to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of mutuality that exists under the law of large numbers disappears. But aren’t we reinventing a version 2.0 of the tontine associations, with the strong return of risk sharing within close-knit communities?

References

Joshua Angrist. The perils of peer effects. Labor Economics, 30, 98-108, 2014

Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.

Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.

Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.

Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, August 4, 2015.

Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5 (9): e12948, arXiv:1004.4792, 2010.

David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press. 2010.

Scott Feld. Why your friends have more friends than you do, American Journal of Sociology, 96 (6): 1464–1477, 1991.

Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.

Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.

Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology. 27: 415–444, 2001.

James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107 (3): 679-716, 2001.

Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol 66 (4) : 470-478, 2005.

Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013,

Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.

Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.

[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html

[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively

Les réseaux pour réinventer l’assurance ?

La théorie des réseaux, ou des graphs, est née en 1735, suite aux travaux de Léonard Euler, qui essayait de trouver une promenade – à partir d’un point donné – qui fasse revenir à ce point en passant une fois et une seule par chacun des sept ponts de la ville de Königsberg. On peut rapprocher ces réseaux des réseaux de métro, constitués de stations (les nœuds), liés entre deux par des rails, ou pas, ou plus généralement un réseau routier, pouvant donner lieu à des études de congestion, par exemple. Mais les réseaux sont aujourd’hui surtout sociaux, reliant les personnes, par des liens d’amitiés, professionnels, familiaux, ou monétaires. L’analyse des réseaux permet de créer des communautés relativement homogène, acceptant de partager un risque, recréant une mutualisation.

Réseau et crédit

En généalogie, on aura des réseaux hiérarchiques, un enfant étant lié à ses parents, eux-mêmes reliés à leurs parents. En sociologie, les réseaux sociaux permettent d’analyser les liens entre des individus (ou des organisations) au sein d’un ensemble. On pourra étudier les amitiés dans une cour d’école (un lien pouvant être une invitation à un anniversaire) ou des échanges de messages électroniques dans une entreprise (la base des courriels d’Enron a ainsi été abondamment utilisée, avec plus de 180 000 messages échangés entre 36 000 employés[i]). La Figure 1 montre ainsi deux réseaux de 20 individus (A, B, …, T).

Figure 1: random networks, 20 nodes (Watts-Strogatz and Barabási).

In a Facebook- or LinkedIn-type view, we will say that E and F are linked, in the sense of “friends”, if there is an edge connecting nodes E and F. A network can be directed, for instance when studying exchanges of messages (E wrote to F) or loans (E lent money to F). While historically only adjacency was studied (whether a link exists or not), weights can now be added, for instance the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within Europe (the nodes then being the banks). The study of networks within village communities in developing countries has helped to better understand the mechanisms of informal finance. Banerjee et al. (2013) study the diffusion of information in a network, and more specifically microfinance loans.

While networks are useful to better organize microcredit, CNN noted in 2015 that Facebook allowed credit institutions to use a borrower's social network to determine whether he or she represents a good credit risk or not. In particular, if the credit scores of one's friends were too low, a person could be denied credit. This situation is dangerous because of specific properties of networks, and more particularly the friendship paradox.

From the very small world to the friendship paradox

In 1929, Frigyes Karinthy hypothesized that any person on Earth could be linked to any other through a chain of individual acquaintances containing at most six links. “We should select any person from the 1.5 billion inhabitants of the Earth, anyone, anywhere. It appears that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the selected individual using nothing except the network of personal acquaintances.” This theory of the six degrees of separation originated in a short story. It was not until the work of Michael Gurevich in the 1960s, and of Stanley Milgram ten years later, that the first attempts to quantify these relationships appeared, under the name of the “small world problem”. Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, and more recently Bhagat et al. (2016) estimated that any two people on Facebook were connected by an average of three and a half intermediaries. On the random network on the left, a person has, on average, 2 friends, while a randomly chosen friend has, on average, 2.25 friends. On the network on the right, the gap is even larger: there too a person has, on average, 2 friends, but a randomly chosen friend has, on average, more than 4 friends.

Figure 2: random networks, 500 nodes (Watts-Strogatz and Barabási).

This paradox, observed in 1991 by the sociologist Scott Feld, is very easy to prove. Heuristically, it can be linked to the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the left-hand side is the number of friends of my friends, divided by my number of friends. The difference is all the larger when the dispersion of the number of friends is important. While the network on the left is very dense, the one on the right exhibits a power-law property: the distribution of the number of friends follows a power law (or Zipf, or Pareto, distribution). Figure 3 shows the distribution of the number of friends on a network, on a log-log scale: linearity indicates a power-law distribution. This kind of distribution can be found in a very large number of networks, in particular Facebook, as shown by Wohlgemuth & Matache (2014).

Figure 3: distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red).

The classical interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (where one speaks of “peer effects”), but it also has implications in risk management and public health. Christakis & Fowler (2010) showed that flu epidemics can be detected almost two weeks in advance by monitoring infections in a social network. In particular, analyzing the health of the central people in a network is “an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly”. Coming back to the credit-score example, if that score happens to be correlated with the number of friends, the friendship paradox makes it dangerous to use the friends' scores as an indicator of an individual's risk!
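
To make the friendship paradox (and the small-world property) more concrete, here is a minimal sketch, not taken from the original article and assuming the igraph package is available; it simulates a scale-free network and compares the average number of friends with the average number of friends of a randomly chosen friend,

library(igraph)
set.seed(1)
g = sample_pa(500, m=2, directed=FALSE)  # Barabasi-Albert (scale-free) network
d = degree(g)
mean(d)                                  # average number of friends
E = as_edgelist(g)
mean(d[c(E[,1],E[,2])])                  # average number of friends of a randomly chosen friend
mean(d^2)/mean(d)                        # the heuristic ratio E[X^2]/E[X] above
mean_distance(g)                         # average number of degrees of separation

On such a network, the average degree of a random friend is markedly larger than the average degree, which is exactly Feld's paradox, while the average distance between two nodes remains small.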

The importance of homophily

Another important feature of networks is the notion of homophily, introduced in sociology in 2001 by two important papers, and corresponding to the tendency to be connected to one's peers. McPherson et al. (2001) started from the principle that similarity breeds connection, and that, as a consequence, people's personal networks are homogeneous with regard to many sociodemographic, behavioral and intrapersonal characteristics. Moody (2001) studied friendships in elementary-school playgrounds in the United States, and more specifically interracial friendships. Easley & Kleinberg (2010) present numerous consequences of homophily, from how tables are formed at a business dinner to credit granting in the United States. Measuring homophily amounts to asking, given pre-existing groups (based on gender, age, socio-professional category, etc.), how the links are distributed between groups, or within groups.

Figure 4: weak homophily (top) and strong homophily (bottom).
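
As an illustration of how such a measure can be computed (a sketch only, not in the original article, again assuming the igraph package), homophily on a categorical attribute can be quantified with the nominal assortativity coefficient, close to zero when links ignore the groups and close to one when links stay within groups,

library(igraph)
set.seed(1)
g = sample_smallworld(dim=1, size=100, nei=2, p=.05)  # Watts-Strogatz network
grp = sample(1:2, vcount(g), replace=TRUE)            # a random binary attribute
assortativity_nominal(g, types=grp, directed=FALSE)   # close to 0: no homophily on that attribute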

In an insurance context, an actuary seeks to create rating classes, groups that are homogeneous in terms of risk, based on explanatory variables (the so-called rating variables). People who live in the same place, drive the same types of vehicles and share the same characteristics are very likely to be in the same class. But if homophily exists in a population, a rating group might perhaps be observed on a network of friends. Why not, then, consider creating groups within a network?

Using networks in insurance

In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 policyholders in 2018[ii]. In France, a short-lived collaborative-insurance experiment was launched in 2015 with Inspeer, which offered to pool property-insurance deductibles (in motor or home insurance) among friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, rely on the creation of small groups by a broker. Part of the insurance premiums paid goes into a collective fund, the other part to a third-party insurance company. Minor claims incurred by a policyholder are first covered by this group fund. For claims above the deductible, the usual insurer steps in. A group can be formed by the policyholders themselves, forming a social network a bit like Facebook. In this model, the only requirement is that all members of the group hold the same type of insurance (for example, liability insurance together with legal-expenses insurance).

As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. One is indeed less inclined to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of the mutuality that exists under the law of large numbers disappears. But are we not simply reinventing version 2.0 of the tontine associations, with the return of risk pooling within close-knit communities?

References

Joshua Angrist. The perils of peer effects. Labour Economics, 30: 98-108, 2014.

Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.

Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.

Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.

Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, August 4, 2015.

Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5 (9): e12948, arXiv:1004.4792, 2010.

David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press, 2010.

Scott Feld. Why your friends have more friends than you do. American Journal of Sociology, 96 (6): 1464-1477, 1991.

Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.

Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.

Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology, 27: 415-444, 2001.

James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107 (3): 679-716, 2001.

Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol, 66 (4): 470-478, 2005.

Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013.

Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.

Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.

[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html

[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively

“The Cat’ Nat’ mechanism works against prevention”

Earlier this month, https://newsassurancespro.com/ published an interview… let me reproduce a few excerpts here.

Where does catastrophe-risk modeling stand in France?

A lot of progress has been made, in particular thanks to the work of geologists and hydrologists who have modeled the theoretical exposure to catastrophe risks. We now have incredibly precise maps that give the nature of the risks street by street in France, both for flooding and for drought. However, this static view does not hold up well against an approach in terms of flows. We have some of the richest historical datasets in the world, but the mere existence of a dam built upstream of a town can change the actual risk on the ground.

Is the current natural-catastrophe regime viable?

Yes, the natural-catastrophe regime is viable, because the State acts as the insurer of last resort, which keeps the scheme solvent. However, the mechanism works against prevention. Admittedly, the compensation system, and more specifically the deductibles, takes into account whether or not a natural-risk prevention plan (PPRN) has been put in place in a municipality. But it does so ex post, that is, after the event has occurred.

Prevention is the fundamental link missing from the regime as it is currently designed. Any reform that would introduce a dose of prevention would be a step in the right direction.

Is natural-risk management well organized in France?

Insurers, municipalities, regions, the State… natural-risk management involves many different players, and it lacks coordination. Putting a flood-risk prevention plan (PPRI) in place in a municipality is a good thing, but if it is not designed with the neighboring municipalities in mind, its scope remains limited. Flood risk involves dependence mechanisms that are not confined to the geographical boundaries of a town or a village.

Should we move toward more risk segmentation?

It is clear that the level of premium paid by policyholders for natural catastrophes is uncorrelated with the risk to which they are actually exposed. A few months ago, our research chair (on segmentation and mutualization) looked into the question: we had to think about the ethical limits of segmenting catastrophe risk. Actuarial computations of risk exposure showed enormous gaps between, say, the Bouches-du-Rhône and the Orne. The subject seems taboo among insurers. Yet pricing according to the true nature of the risk would be a powerful prevention tool: companies would think twice before setting up in a flood-prone area.

The rest of the interview is online at https://newsassurancespro.com/

Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:

  • The square root link (for the Poisson model)

Consider some random variable Y with mean \mu and variance \sigma^2. Using a Taylor expansion, g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu), so that \mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu) and \text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2.

Assume that Y\sim\mathcal{P}(\lambda) and consider the square-root transformation g(y)=\sqrt{y}. The second equality then becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}

So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
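
As a quick sanity check (a small simulation, not in the original post), the variance of \sqrt{Y} is indeed close to 1/4 for several values of \lambda, at least when \lambda is not too small; note also that R's poisson family accepts link="sqrt" directly,

set.seed(1)
sapply(c(2,5,10,50), function(lambda) var(sqrt(rpois(1e6,lambda))))
# e.g. glm(Y~X1+X2, family=poisson(link="sqrt"), data=base) would fit a Poisson
# regression with a square-root link (variable names here are just placeholders)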

  • The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]. Thus, \mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])], while \mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta}), where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model then looks like a Bernoulli regression with H as link function, \mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta})

So, assume now that instead of observing N we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function should be (more or less) the same as first running a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what's going on. Let us first compare e^{-\lambda_{\mathbf{x}}}, obtained from the Poisson regression, with p_{\mathbf{x}}, the probability of a zero count obtained from a standard Bernoulli (here probit) regression on the indicator \boldsymbol{1}(N=0),

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if p_{\mathbf{x}} were obtained from a Bernoulli regression with a cloglog link function?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here! Now, what about real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)?

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case, the two models give very different predictions. And actually, the same happens with the second comparison, based on the cloglog link,

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that? Could it be because the Poisson model is not good? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)
 
Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.002274   0.048413  -0.047    0.963    
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 *** 
Zero-inflation model coefficients (binomial with logit link): 
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…
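
A quick complementary check (a sketch, not in the original post): compare the observed proportion of zeros with the proportion implied by the fitted Poisson regression,

mean(base$Y==0)                               # observed proportion of zeros
mean(exp(-predict(regPois,type="response")))  # proportion implied by the Poisson fit
# a substantially larger observed proportion points to zero inflation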

Extracting information from a picture, round 2

Yesterday, I published a post on extracting information from a picture, but it did not work as expected. I claimed that it was because of the original graph I had: more precisely, the map was based on some weird projection, and I could not reconcile it with standard shapefiles. So I decided to cheat a little bit, by creating my own map,

Colors are ugly, I know. But I got them using

u = seq(0,1,length=30)
couleurs = rgb(u,rev(u),0,1)

The picture is

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage3.png"
library(pixmap)
library(png)
IMG = readPNG(url)

I used those colors because it would make things easy when extracting reds and greens…

# x1, x2, y1, y2 are the cropping bounds of the map, obtained as in the previous post
ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Let us see if the contour of France can be overlaid

library(maptools)
library(PBSmapping)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
par(mfrow=c(1,1))
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We have a perfect match, don’t we…?

Let us now use a shapefile based on départements,

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR2=readRDS("FRA_adm2.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR2)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
k=35
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
points(pX,pY)

For instance, the thirty-fifth polygon is the following

Let us extract the color inside that polygon

u=1:nrow(ROUGE)
v=1:ncol(ROUGE)

The code would be

pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
image(u,v,ROUGE*M,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(pX,pY)

Now, for each département, I extract the average value of red, and the average value of green,

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
# baseChomage (departement number and unemployment rate, 2017 Q1) was built in a previous post
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

On the graph below, we have the original values on the x-axis (unemployment, in percent) and, on the y-axis, the extracted proportion of red (red relative to red plus green). Note that the points are almost perfectly correlated… The accumulation of points (several départements sharing the same extracted value) can be explained by the fact that, on the original map, different values could be mapped to the same color.

So far, I can claim that we’ve been able to extract useful information from the original picture.

Consider now the case where the original map is the following one

The picture can be downloaded using the following code

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage5.png"
library(pixmap)
library(png)
IMG = readPNG(url)

Here, the colors are obtained from a standard palette,

library(pals)
couleurs = rev(brewer.rdylgn(30))

Here again, we use our previous code to extract reds and greens

And if we use our function

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

we obtain the following graph

Here again, we have a strong correlation, not to say comonotonic variables (in the sense that the ranks are identical). Nice, isn't it?

Extracting information from a picture, round 1

This week, I wanted to get the information displayed on the nice map below. I could not get access to the original dataset, per zip code… and I was wondering whether (assuming that the map had a high enough resolution) it was actually possible to extract the information, using a simple R function…

As we can see, there is red and green on the map, and I would love to know which cities in France are green and which are red. One important issue is actually the background. Here it is nice and white… but white is a tricky color: it is achromatic and very light, so if I look for red areas, the background is very red (and very green, too). To avoid this issue, I used GIMP to change the background to black: where it is black, there is neither red nor green!
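
Just to illustrate the point about the background (a small check, not in the original post): in the RGB decomposition, white has maximal red and green components while black has none,

col2rgb(c("white","black","red","green"))
# rows are the red, green and blue components (0-255): white is (255,255,255), black is (0,0,0)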

Let us get the map, and extract information from the file

url="https://f.hypotheses.org/wp-content/blogs.dir/253/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)
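
Before going further, here is a quick look at what readPNG returns (an illustration, not in the original post),

dim(IMG)   # height, width, and number of channels (3 for RGB, 4 if there is an alpha channel)
IMG[1,1,]  # components of the top-left pixel, each between 0 and 1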

Information is stored in a three-dimensional array: dimension 1 is the height of the picture (in pixels), dimension 2 its width, and the third one indexes the channel, either 1 (red), 2 (green) or 3 (blue), based on the RGB decomposition of each pixel. Then, I try to find the borders of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it is zero, it means there is no color in that column of the matrix, i.e. it is completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that – somehow – France is upside-down)

We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and similarly on the right border,

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1

and on the vertical range, to get the top and bottom cutoffs (defined analogously),

y1=min(which(y>.2))-1
y2=max(which(y>.2))+1

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That's not bad, is it? And we can get a similar graph for the green part

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to overlay a map of France on that picture. Using shapefiles of administrative regions (départements, cantons, etc.), it would be possible to get the proportion of red and green in each area. As a starting point (before going down to the départements), let us use a standard shapefile for the whole of France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

Here we simply rescale the polygon: its leftmost point should match the left edge of the cropped picture, its rightmost point the right edge, and the same for the top and the bottom,

Unfortunately, even when changing the projection, I could not perfectly match the contour of France. I am quite sure that it is a projection problem! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…
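
One direction that might be worth exploring (a sketch only, not something tried in the post, and assuming the sp and rgdal packages are installed) is to reproject the GADM polygons into Lambert-93 (EPSG:2154), the official projection for maps of metropolitan France, before rescaling,

library(sp)
library(rgdal)                                 # provides spTransform for sp objects
FR93 = spTransform(FR, CRS("+init=epsg:2154")) # from longitude/latitude to Lambert-93
PP93 = SpatialPolygons2PolySet(FR93)
# then rescale PP93$X and PP93$Y exactly as above, and overlay the points again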