
Hiding values in the output of the summary function for a (linear) regression

Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I started to play intensively with Achim’s awesome r-exams package. But there are still a few things that I wanted to add, so I will write a series of posts on this blog to keep track of the updated functions I write. Most of them are modifications of internal R functions, so the code is hard to read. Here is the file, which I will update frequently:

url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)

I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression

library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: 0.2362

A classical question I ask in my quizzes is to hide the p-value of the F-test and to ask what it is (to make sure that students understand the equivalence between the F-test and the t-test in a simple regression). To hide that p-value, use

my_summary(reg, Fisher=TRUE)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: ■■■■■

(and then, in a multiple choice question, I ask whether it is 1%, 5%, 12%, 23% or 47%, for example). That one was easy, since all those lines are produced by the cat function, so I just modify the call when necessary

if(Fisher)  cat("\nF-statistic:", formatC(x$fstatistic[1L], digits = digits),
    "on", x$fstatistic[2L], "and", x$fstatistic[3L],
    "DF,  p-value:", "■■■■■")
if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], digits = digits),
    "on", x$fstatistic[2L], "and", x$fstatistic[3L],
    "DF,  p-value:", format.pval(pf(x$fstatistic[1L], x$fstatistic[2L],
        x$fstatistic[3L], lower.tail = FALSE), digits = digits))

(here I use the Unicode ‘black square‘ symbol to hide numbers). Of course, I can also hide the value of \sigma, or the R^2 (adjusted or not), etc.
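
The same trick works for the other cat lines of the print method. For instance, to hide the residual standard error, one could add a (hypothetical) sigma argument to my_summary and write something like the following, reusing the objects available inside print.summary.lm,

rdf = x$df[2L]
# same pattern as for the F statistic: print black squares instead of the value
if(sigma)  cat("\nResidual standard error:", "■■■■■",
    "on", rdf, "degrees of freedom")
if(!sigma) cat("\nResidual standard error:", format(signif(x$sigma, digits)),
    "on", rdf, "degrees of freedom")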

Now, something a little bit more tricky: what if we want to change the regression table, with the coefficients, their standard errors, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is handled by the printCoefmat function. So in my code, I have an internal function where I put some ‘black squares‘ (with the right number of them, to keep the table readable) at some specific locations. Consider a more complex regression

reg = lm(prestige ~ ., data=Prestige)

and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the coefficient matrix) and the p-value of the t-test for the fourth coefficient (i.e. located at (4,4) in the matrix – on that row, the first column is \widehat{\beta}_3, the second one its standard error, the third one the t value, and the fourth one the p-value of the test). I use the following two vectors

vligne = c(1,4)
vcolonne = c(1,4)

with the rows and columns in the matrix (of course, the two should have the same length). The good thing is that the printCoefmat function converts numerical values into characters (so that everything actually looks like aligned columns). So we simply have to replace the digits by squares

Cf2 = Cf
if(length(vligne) > 0){
  for(i in 1:length(vligne)){
    # replace each targeted cell by black squares of the same width, to keep the table aligned
    long = nchar(Cf[vligne[i], vcolonne[i]])
    Cf2[vligne[i], vcolonne[i]] = paste(rep("■", long), collapse = "")
  }
}

Then, we print the updated version of the table

print.default(Cf2, quote = quote, right = right, na.print = na.print,...)

For example, here, it would be

my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))
 
Call:
lm(formula = prestige ~ ., data = Prestige)
 
Residuals:
     Min       1Q   Median       3Q      Max 
-12.9863  -4.9813   0.6983   4.8690  19.2402 
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) ■■■■■■■■■■  8.018e+00  -1.513  0.13380    
education    3.933e+00  6.535e-01   6.019 3.64e-08 ***
income       9.946e-04  2.601e-04   3.824  0.00024 ***
women        1.310e-02  3.019e-02   0.434  ■■■■■■■    
census       1.156e-03  6.183e-04   1.870  0.06471 .  
typeprof     1.077e+01  4.676e+00   2.303  0.02354 *  
typewc       2.877e-01  3.139e+00   0.092  0.92718    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 7.037 on 91 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared:  0.841,	Adjusted R-squared:  0.8306
F-statistic: 80.25 on 6 and 91 DF,  p-value: < 2.2e-16

Of course, it is handmade, and I do not check for invalid inputs (you should not ask to put squares in the seventh column, for instance), but it works well enough to generate random regressions for a quiz (or identical regressions on subsamples of a large dataset), and to hide some of the values.
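
For instance, one quiz item per student can be generated from a random subsample of the data, hiding a different cell each time; here is a minimal sketch (the formula, the subsample size and the hidden cell are arbitrary choices of mine),

set.seed(1)                            # would be randomized for each exam copy
idx = sample(1:nrow(Prestige), 80)     # random subsample of the dataset
reg_s = lm(prestige ~ education + income + women, data = Prestige[idx,])
my_summary(reg_s, vligne = c(2), vcolonne = c(4))   # hide the p-value of the education coefficient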

“Family History” and Life Insurance

Today and tomorrow, I will attend the Online International Conference in Actuarial science, data science and finance, organised by colleagues in Lyon. But I won’t be in Lyon, I will be at home, in Montréal…

I will give a talk on Wednesday afternoon, on a joint paper with Ewen Gallic and Olivier Cabrignac. Slides are available here, and if I can get a copy of the video, I will share it…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I want to go into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), which will actually appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) showed (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)), econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data.

We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, by \mathbf{x} a vector of observations, by X a random variable, and by \mathbf{X} a random vector. With a slight abuse of notation, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well; in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have reliable a priori information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can be found in all statistical textbooks, even today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that, under broad conditions, the sum of a large number of random variables is approximately described by the normal law.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming independent observations (Y_i, \mathbf{X}_i) (it is possible to imagine temporal observations – then we would have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~(1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2. Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. Then we will set (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace. The maximum likelihood estimator of (\beta_0,\mathbf{\beta}) is thus obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.
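
As a quick numerical sanity check (my own addition, not in the original text), maximizing this Gaussian log-likelihood numerically recovers the least squares estimates returned by lm(), here on the cars dataset,

X = cbind(1, cars$speed)               # design matrix with an intercept
y = cars$dist
negloglik = function(theta){
  beta  = theta[1:2]
  sigma = exp(theta[3])                # log-parametrization keeps sigma positive
  -sum(dnorm(y, mean = X %*% beta, sd = sigma, log = TRUE))
}
opt = optim(c(mean(y), 0, log(sd(y))), negloglik, method = "BFGS")
opt$par[1:2]                           # close to coef(lm(dist ~ speed, data = cars))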

The first order conditions yield the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as is common in econometrics), \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments, \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta} and Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but the estimator given by equation (2) can be constructed without that Gaussian assumption. Hence, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with the smallest variance[3] among unbiased linear estimators. Furthermore, even if normality does not hold at finite distance, this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
The condition of having a full rank matrix \mathbf{X} can be a (numerically) strong requirement in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, for any \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
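
Both closed-form expressions are easy to check numerically; a small sketch (again on the cars dataset, with an arbitrary \lambda=1),

X = cbind(1, cars$speed)                                    # design matrix with an intercept
y = cars$dist
solve(t(X) %*% X) %*% t(X) %*% y                            # same as coef(lm(dist ~ speed, data = cars))
lambda = 1
solve(t(X) %*% X + lambda * diag(ncol(X))) %*% t(X) %*% y   # ridge estimator of level lambda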

Residuals

It is not uncommon to introduce the linear model through the distribution of the residuals, as we mentioned earlier. Equation (1) is then often written as y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables following some \mathcal{N}(0,\sigma^2) distribution. With vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}]. Those (estimated) residuals are a basic tool for assessing the relevance of the model.
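
In R, these estimated residuals can be computed by hand and compared with the residuals() extractor; a short illustration (on the cars dataset again),

reg = lm(dist ~ speed, data = cars)
eps = cars$dist - (coef(reg)[1] + coef(reg)[2] * cars$speed)
all.equal(as.numeric(eps), as.numeric(residuals(reg)))   # TRUE
plot(fitted(reg), eps)                                   # basic diagnostic plot
abline(h = 0, lty = 2)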

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedasticity: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in count models or logistic regression.
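
A quick simulation (my own illustration) shows what such a heteroscedastic sample looks like, with a standard deviation increasing in x,

set.seed(123)
n = 200
x = runif(n, 0, 10)
y = 2 + 3 * x + (1 + x) * rnorm(n)     # here sigma(x) = 1 + x
plot(x, y)
abline(lm(y ~ x), col = "red")         # OLS remains unbiased, but is no longer efficient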

However, writing the model with an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a (linear, to begin with) relationship between the quantity of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term), where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed. However, this interpretation often makes the link between an economic relationship and a complicated economic model difficult to establish: economic theory speaks abstractly about a relationship between two magnitudes, while the econometric model imposes a specific form (which magnitude is y and which is x), as shown in more detail in Morgan (1990), Chapter 7.

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Kean (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.

Playing with robots

My son would be extremely proud if I told him I can spend hours building robots. Well, my robots are not as fancy as Dr. Tenma’s, but they usually do what I ask them to do. For instance, it is extremely simple to build a robot with R to extract data from websites. I have mentioned it here (on tennis matches), but it failed there (on the NY Marathon). To illustrate the use of robots, assume that one wants to build a dataset to study the prices of airline tickets. First, we have to choose a departure city (e.g. Paris) and an arrival city (e.g. Montreal). Then, we want to look at all possible departure dates from April 1st (I ran the code last month) until the end of the following February (so we create vectors with all departure dates, namely one for the day, one for the month, and one for the year). Then, we choose a return date (say 3 days later).

DEP="Paris"
ARR="Montreal"
DATE1D=rep(c(1:30,1:31,1:30,1:31,1:31,1:30,1:31,1:30,
1:31,1:31,1:29),3)
DATE1M=rep(c(rep(4,30),rep(5,31),rep(6,30),rep(7,31),
rep(8,31),rep(9,30),rep(10,31),rep(11,30),rep(12,31),
rep(1,31),rep(2,29)),3)
DATE1Y=rep(c(rep(2011,30+31+30+31+31+30+31+
30+31),rep(2012,31+29)),3)
k=3
DATE3D=c((1+k):30,1:31,1:30,1:31,1:31,1:30,1:31,
1:30,1:31,1:31,1:29,1:k)
DATE3M=c(rep(4,30-k),rep(5,31),rep(6,30),rep(7,31),rep(8,31),
rep(9,30),rep(10,31),rep(11,30),rep(12,31),rep(1,31),rep(2,29),
rep(3,k))
DATE3Y=c(rep(2011,30-k+31+30+31+31+30+31+
30+31),rep(2012,31+29+k))

It is also possible (for a nice robot) to skip all the dates prior to today

skip=max(as.numeric(Sys.Date()-as.Date("2011-04-01")),1)

Then, we need a website where requests can be written nicely (with cities and dates appearing explicitly in the url). Here, I cannot mention the website that I used, since it is stated there that it is strictly forbidden to run automatic requests… Anyway, consider a loop that creates a url address (actually I picked the date index randomly, since I had been told that those websites have memory: if you ask too many times for the same thing during a short period of time, prices go up),

URL=paste("http://www.♦♦♦♦/dest.dll?qscr=fx&flag=q&city1=",
DEP,"&citd1=",ARR,"&",
"date1=",DATE1D[s],"/",DATE1M[s],"/",DATE1Y[s],
"&date2=",DATE3D[s],"/",DATE3M[s],"/",DATE3Y[s],
"&cADULT=1",sep="")

then, we just have to scan the webpage, looking for ticket prices (just looking for some specific names)

page=as.character(scan(URL,what="character"))
I=which(page%in%c("Price0","Price1","Price2"))
if(length(I)>0){
PRIX=substr(page[I+1],2,nchar(page[I+1]))
# prices above 1000 are split over two tokens, so paste the second part back
if(PRIX[1]=="1"){PRIX=paste(PRIX,page[I+2],sep="")}
if(PRIX[1]=="2"){PRIX=paste(PRIX,page[I+2],sep="")}
}

Here, we have to be a bit cautious when prices exceed 1000 (hence the string concatenation in the code above). Then, it is possible to start a statistical study. For instance, if we compare two destinations (from Paris), e.g. Montréal and New York, we obtain the following patterns (with high prices during holidays),

It is also possible to run the code twice (here it was run last month, and a couple of days ago), for the same destination (from Paris to Montréal),

Of course, it would be great if I could run that code, say, every week, to build up a nice dataset, and to study the dynamics of prices…
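
For what it is worth, a rough sketch of such a wrapper could look as follows (the data frame structure, the waiting times and the file name are mine; the scraping part simply reuses the snippets above),

BASE = NULL
for(s in sample(skip:length(DATE1D))){   # random order, to avoid suspicious request patterns
  PRIX = NULL
  # ... build the URL for index s and extract PRIX, as in the snippets above ...
  if(length(PRIX) > 0){
    BASE = rbind(BASE, data.frame(
      scan_date = Sys.Date(),
      dep = as.Date(paste(DATE1Y[s], DATE1M[s], DATE1D[s], sep = "-")),
      prix = min(as.numeric(PRIX))))
  }
  Sys.sleep(runif(1, 10, 30))            # wait a little between two requests
}
write.csv(BASE, "prix-paris-montreal.csv", row.names = FALSE)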

The problem is that it is forbidden to do this. In fact, the website mentions that if we want to extract data (for academic purposes), it is possible to ask for an extraction. But if we do tell them which specific prices we are studying, the data might be biased. So a good idea would be to use several servers, to make several requests randomly, and to collect them (changing dates and destinations). But here, my computing skills – unfortunately – reach a limit…