Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I have started to play intensively with Achim’s awesome r-exams package. But there are still a few things I wanted to add, so I will write a series of posts on my blog to keep track of the updated functions I write. Most of them are modifications of R internal functions, so the code is hard to read. Here is the file, which I will update frequently
url = "http://freakonometrics.free.fr/onlineExams.R"
source(url) |
url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)
I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression
library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)
Call:
lm(formula = prestige ~ women, data = Prestige)
Residuals:
Min 1Q Median 3Q Max
-33.444 -12.391 -4.126 13.034 39.185
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 48.69300 2.30760 21.101 <2e-16 ***
women -0.06417 0.05385 -1.192 0.236
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared: 0.014, Adjusted R-squared: 0.004143
F-statistic: 1.42 on 1 and 100 DF, p-value: 0.2362
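Note, by the way, that in a simple regression the F statistic is just the square of the t statistic of the slope, so the F-test and the t-test on women have the same p-value; this can be checked directly on the (standard) summary object:
s = summary(reg)
s$fstatistic[1]                        # F statistic, 1.42
s$coefficients["women", "t value"]^2   # squared t statistic, same value up to rounding
pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3],
   lower.tail = FALSE)                 # p-value of the F-test, 0.2362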
A classical question I ask in my quizzes is to hide the p-value of the F-test, and ask what it is (to make sure that students understand the equivalence between the F-test and the t-test, in a simple regression). To hide the p-value, use
my_summary(reg, Fisher=TRUE)
Call:
lm(formula = prestige ~ women, data = Prestige)
Residuals:
Min 1Q Median 3Q Max
-33.444 -12.391 -4.126 13.034 39.185
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 48.69300 2.30760 21.101 <2e-16 ***
women -0.06417 0.05385 -1.192 0.236
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared: 0.014, Adjusted R-squared: 0.004143
F-statistic: 1.42 on 1 and 100 DF, p-value: ■■■■■
(and then, in a multiple-choice exam, I ask whether it is 1%, 5%, 12%, 23% or 47%, for example). That one was easy, since all those lines are printed with the cat function, so I just modify the call, if necessary
# when Fisher = TRUE, print the F statistic but mask its p-value
if(Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L],
    digits = digits), "on", x$fstatistic[2L], "and",
    x$fstatistic[3L], "DF, p-value:", "■■■■■")
# otherwise, print the usual line, as in print.summary.lm
if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L],
    digits = digits), "on", x$fstatistic[2L], "and",
    x$fstatistic[3L], "DF, p-value:", format.pval(pf(x$fstatistic[1L],
    x$fstatistic[2L], x$fstatistic[3L], lower.tail = FALSE),
    digits = digits))
(here I use the Unicode ‘black square’ symbol to hide numbers). Of course, I can also hide the value of \sigma, or the (adjusted or not) R^2, etc.
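The idea is always the same; for instance, here is a sketch of how the residual standard error line could be masked inside the printing function (hide_sigma is just a hypothetical name for the flag, and x is the summary object, as above):
# sketch only, assuming a hypothetical hide_sigma argument
if(hide_sigma) cat("\nResidual standard error:", "■■■■■",
    "on", x$df[2L], "degrees of freedom\n")
if(!hide_sigma) cat("\nResidual standard error:",
    format(signif(x$sigma, digits)),
    "on", x$df[2L], "degrees of freedom\n")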
Now, something a little more tricky: what if we want to change the regression table, with the coefficients, their standard errors, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is handled by the printCoefmat function. So in my code, I have an internal function where I put some ‘black squares’ (the right number of them, to keep a readable layout) at some specific locations. Consider a more complex regression
reg = lm(prestige ~ ., data=Prestige)
and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the matrix) and the p-value of the t-test for the fourth row (i.e. located at (4,4) in the matrix, since on that row the first column is \widehat{\beta}_3, the second one its standard error, the third one the t value, and the fourth one the p-value of the test). I use the following two vectors
vligne = c(1,4)
vcolonne = c(1,4)
with the rows and columns in the matrix (of course, the two should have the same length). The good thing is that the printCoefmat function converts numerical values into characters (so that things actually line up in columns). So we simply have to replace the digits with squares
Cf2 = Cf
if(length(vligne) > 0){
  for(i in 1:length(vligne)){
    # width of the formatted cell, so that the mask has the same number of characters
    long = nchar(Cf[vligne[i], vcolonne[i]])
    Cf2[vligne[i], vcolonne[i]] = paste(rep("■", long), collapse = "")
  }}
Then, we print the updated version of the table
# same call as at the end of printCoefmat, but on the masked matrix
print.default(Cf2, quote = quote, right = right, na.print = na.print, ...)
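As a sanity check of both the indices and the trick, the same masking can be done by hand on the formatted coefficient matrix (using format instead of printCoefmat, just to keep this sketch short; the layout is cruder, but the idea is the same):
Cf = format(coef(summary(reg)), digits = 3)  # character matrix: Estimate, Std. Error, t value, Pr(>|t|)
Cf[1, 1] = paste(rep("■", nchar(Cf[1, 1])), collapse = "")  # hide the intercept estimate
Cf[4, 4] = paste(rep("■", nchar(Cf[4, 4])), collapse = "")  # hide the p-value on the fourth row (women)
print(Cf, quote = FALSE, right = TRUE)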
For example, with my_summary, here is what we would get
my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))
Call:
lm(formula = prestige ~ ., data = Prestige)
Residuals:
Min 1Q Median 3Q Max
-12.9863 -4.9813 0.6983 4.8690 19.2402
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) ■■■■■■■■■■ 8.018e+00 -1.513 0.13380
education 3.933e+00 6.535e-01 6.019 3.64e-08 ***
income 9.946e-04 2.601e-04 3.824 0.00024 ***
women 1.310e-02 3.019e-02 0.434 ■■■■■■■
census 1.156e-03 6.183e-04 1.870 0.06471 .
typeprof 1.077e+01 4.676e+00 2.303 0.02354 *
typewc 2.877e-01 3.139e+00 0.092 0.92718
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 7.037 on 91 degrees of freedom
(4 observations deleted due to missingness)
Multiple R-squared: 0.841, Adjusted R-squared: 0.8306
F-statistic: 80.25 on 6 and 91 DF, p-value: < 2.2e-16
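(And, as with the F-test question, the hidden cells can still be recovered from what is displayed: the masked estimate is the t value times the standard error, and the masked p-value follows from the t value and the 91 residual degrees of freedom.)
-1.513 * 8.018                              # the hidden intercept estimate, about -12.1
2 * pt(0.434, df = 91, lower.tail = FALSE)  # the hidden p-value for women, about 0.66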
Of course, it is hand-made, and I do not check for invalid inputs (for instance, you should not ask to put squares in the seventh column), but it works well enough to generate random regressions in a quiz (or identical regressions on subsamples of a large dataset), and to hide values.
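For instance, here is a minimal sketch of the subsample idea (the seed and the subsample size are arbitrary):
set.seed(123)                     # arbitrary seed, only for reproducibility
idx = sample(nrow(Prestige), 80)  # random subsample of 80 occupations
reg_sub = lm(prestige ~ ., data = Prestige[idx, ])
my_summary(reg_sub, vligne = c(1,4), vcolonne = c(1,4))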