Tag Archives: R

Hiding values in the output of the summary function for a (linear) regression

Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I have started playing intensively with Achim’s awesome r-exams package. But there are still a few things that I wanted to add, so I will post a series of posts on my blog to keep track of the updated functions I will write. Most of them are modifications of internal R functions, so the code is hard to read. Here is the file, and I will update it frequently

url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)

I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression

library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: 0.2362

A classical question I ask in my quizzes is to hide the p-value of the F-test, and ask what it is (to make sure that students understand the equivalence between the F test and the t test, in a simple regression). To hide the p-value, use

my_summary(reg, Fisher=TRUE)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: ■■■■■

(and then, in a multiple-choice exam, I ask whether it is 1%, 5%, 12%, 23% or 47%, for example). That one was easy, since all those lines are printed with the cat function, so I just modify it, when necessary

if(Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
        x$fstatistic[3L], "DF,  p-value:", "■■■■■")
    if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
                   x$fstatistic[3L], "DF,  p-value:", format.pval(pf(x$fstatistic[1L], 
                   x$fstatistic[2L], x$fstatistic[3L], lower.tail = FALSE), 
                   digits = digits))

(here I use the unicode ‘black square‘ symbol to hide numbers). Of course, I can hide the value of \sigma, or the (adjusted or not) R^2, etc.
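
Just as a quick sanity check (and this is essentially what students are expected to do), we can verify on the simple regression above that the F statistic is the square of the t statistic of the slope, and that the two p-values coincide, so the hidden value can be recovered from the t test:

# in a simple regression, F = t^2 for the slope, and the two p-values match
t_stat = summary(reg)$coefficients["women", "t value"]
f_stat = summary(reg)$fstatistic
c(F = unname(f_stat[1]), t_squared = t_stat^2)
# p-value of the F test (the hidden one) and of the t test on the slope
pf(f_stat[1], f_stat[2], f_stat[3], lower.tail = FALSE)
2 * pt(abs(t_stat), df = f_stat[3], lower.tail = FALSE)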

Now, something a little bit more tricky: what if we want to change the regression table, with the coefficients, their standard errors, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is done through the printCoefmat function. So in my code, I have my own internal function, where I ask to put ‘black squares‘ (the right number of them, to keep a readable format) at some specific locations. Consider a more complex regression

reg = lm(prestige ~ ., data=Prestige)

and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the coefficient matrix) and the p-value of the t-test for the fourth row (i.e. located at (4,4) in the matrix – since the first column is the estimate \widehat{\beta}_3, the second one its standard error, the third one the t value, and then, the fourth one, the p-value of the test). I use the following two vectors

vligne = c(1,4)
vcolonne = c(1,4)

with the rows and columns in the matrix (of course, the two should have the same length). Then, the good thing is that the printCoefmat function converts numerical values into characters (so that things actually look like columns). So we simply have to remove the numerical digits, and use squares instead

Cf2=Cf
  if(length(vligne)>0){  
    for(i in 1:length(vligne)){
      long = nchar(Cf[vligne[i],vcolonne[i]])
      Cf2[vligne[i],vcolonne[i]] = paste(rep("■",long),collapse = "")
    }}

Then, we print the updated version of the table

print.default(Cf2, quote = quote, right = right, na.print = na.print,...)

For example, here, it would be

my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))
 
Call:
lm(formula = prestige ~ ., data = Prestige)
 
Residuals:
     Min       1Q   Median       3Q      Max 
-12.9863  -4.9813   0.6983   4.8690  19.2402 
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) ■■■■■■■■■■  8.018e+00  -1.513  0.13380    
education    3.933e+00  6.535e-01   6.019 3.64e-08 ***
income       9.946e-04  2.601e-04   3.824  0.00024 ***
women        1.310e-02  3.019e-02   0.434  ■■■■■■■    
census       1.156e-03  6.183e-04   1.870  0.06471 .  
typeprof     1.077e+01  4.676e+00   2.303  0.02354 *  
typewc       2.877e-01  3.139e+00   0.092  0.92718    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 7.037 on 91 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared:  0.841,	Adjusted R-squared:  0.8306
F-statistic: 80.25 on 6 and 91 DF,  p-value: < 2.2e-16

Of course, it is handmade, and I do not check for mistakes (for instance, you should not ask to put squares in the seventh column), but it works well enough to generate random regressions in a quiz (or identical regressions on subsamples of a large dataset), and to hide values.

Cascading regressions

Last weekend, I put online a short post (the “deuxième effet kiss-cool” one) where I recalled that running a regression on several correlated explanatory variables is not equivalent to running several simple regressions. That is what the Frisch-Waugh theorem, published almost 90 years ago, tells us. But I hinted that it was possible to run cascading regressions, without going much further. It is worth rereading Michael Lovell’s paper, which gives a geometric interpretation of this construction. But let us code it, to see what happens… Consider a dataset with 3 explanatory variables, corresponding to numbers of fires in Chicago

chicago=read.table("http://freakonometrics.free.fr/chicago.txt", header=TRUE,sep=";")

The multiple regression is written here as y_i=\beta_0+\beta_1 x_{1,i}+\beta_2 x_{2,i}+\beta_3 x_{3,i}+\varepsilon_i. We can estimate these parameters by least squares,

reg=lm(Fire~.,data=chicago)
summary(reg)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 22.07525    6.19447   3.564 0.000910 ***
X_1         -0.62764    5.28130  -0.119 0.905953    
X_2          0.22378    0.06161   3.632 0.000744 ***
X_3         -1.55059    0.38195  -4.060 0.000204 ***

and we then have the following prediction
\widehat{y}_i=\widehat{\beta}_0+\widehat{\beta}_1 x_{1,i}+\widehat{\beta}_2 x_{2,i}+\widehat{\beta}_3 x_{3,i}. Actually, I can even visualize this prediction.

BC = cbind(x0=coefficients(reg)[1],
           x1=coefficients(reg)[2]*chicago[,2],
           x2=coefficients(reg)[3]*chicago[,3],
           x3=coefficients(reg)[4]*chicago[,4],
           yp=predict(reg),y=chicago$Fire)
colr = 1:4
dessin_fire = function(i){
plot(0:3,0:3,xlim=c(0,35),col="white",ylim=c(-1,3),xlab="",ylab="",axes=FALSE)
abline(v=BC[i,1],lty=2)
rect(BC[i,1],3,BC[i,1]+BC[i,2],2,col=colr[1],border=NA)
arrows(BC[i,1],2.5,BC[i,1]+BC[i,2],2.5,lwd=2,length=.1,col="white")
rect(BC[i,1]+BC[i,2],2,BC[i,1]+BC[i,2]+BC[i,3],1,col=colr[2],border=NA)
arrows(BC[i,1]+BC[i,2],1.5,BC[i,1]+BC[i,2]+BC[i,3],1.5,lwd=2,length=.1,col="white") 
rect(BC[i,1]+BC[i,2]+BC[i,3],1,BC[i,1]+BC[i,2]+BC[i,3]+BC[i,4],0,col=colr[3],border=NA)
segments(BC[i,5],0,BC[i,5],1,lwd=3)
arrows(BC[i,1]+BC[i,2]+BC[i,3],.5,BC[i,1]+BC[i,2]+BC[i,3]+BC[i,4],.5,lwd=2,length=.1,col="white") 
abline(v=BC[i,1],lty=2)
rect(BC[i,5],-1,BC[i,6],0,col=colr[4],density=20,border=NA)
segments(BC[i,6],0,BC[i,6],-1,lwd=3,col=colr[4])
text(26,2.5,expression(X[1]),pos=4,col=colr[1])
text(26,1.5,expression(X[2]),pos=4,col=colr[2])
text(26,.5,expression(X[3]),pos=4,col=colr[3])
text(26,-.5,expression(epsilon),pos=4,col=colr[4])
axis(1)}

For example, for the twelfth observation of my dataset

dessin_fire(12)

we obtain

We read the figure from top to bottom: we start with the constant, around 22. Then we add next to nothing, because x_1 was not significant in our regression. Then we add a little something because of x_2, moving up to 25, and x_3 makes us drop by more than 20 points, to end up around 3.67. The black segment is the prediction obtained from the three variables. In blue, we can even see the error.

Note that x_1 is not significant, while being relatively correlated with y

cor(chicago[,1:2])
          Fire       X_1
Fire 1.0000000 0.3773486
X_1  0.3773486 1.0000000

If x_1 is not significant, it is because this variable is highly correlated with another explanatory variable. Actually, we can run a series of cascading regressions, starting with x_1. More specifically, we can write \widehat{y}_i=\underbrace{\widehat{b}_0+\widehat{b}_1 x_{1,i}}_{(1)}+\underbrace{\widehat{b}_{0,2}+\widehat{b}_2 \tilde{x}_{2,i}}_{(2)}+\underbrace{\widehat{b}_{0,3}+\widehat{b}_3 \tilde{x}_{3,i}}_{(3)} where the first term (1) is obtained by simply regressing y on x_1 (yes, we just run a simple regression), y_i=b_0+b_1 x_{1,i}+\eta_i, and then we add a small correction, to take into account what x_2 tells us once the first regression has been run. This is exactly what the Frisch-Waugh theorem says: we project y on x_1^{\perp} (since that is precisely what x_1 does not explain), and x_2 on x_1^{\perp}, and we regress one projection on the other, that is, for (2), \Pi_{x_1^{\perp}}y_i=b_{0,2}+b_2 \Pi_{x_1^{\perp}}x_{2,i}+\eta_{2,i}, and then, for (3), \Pi_{(x_1,x_2)^{\perp}}y_i=b_{0,3}+b_3 \Pi_{(x_1,x_2)^{\perp}}x_{3,i}+\eta_{3,i}. In R, this gives

reg1=lm(Fire~X_1,data=chicago)
reg2=lm(residuals(lm(Fire~X_1,data=chicago))~residuals(lm(X_2~X_1,data=chicago)))
reg3=lm(residuals(lm(Fire~X_1+X_2,data=chicago))~residuals(lm(X_3~X_1+X_2,data=chicago)))

that is, here,

BC123=cbind(x0=coefficients(reg1)[1],
            x1=coefficients(reg1)[2]*chicago[,2],
            x2=coefficients(reg2)[1]+coefficients(reg2)[2]*residuals(lm(X_2~X_1,data=chicago)),
            x3=coefficients(reg3)[1]+coefficients(reg3)[2]*residuals(lm(X_3~X_1+X_2,data=chicago)),
            yp=predict(reg),y=chicago$Fire)

Adapting the previous plotting function, we obtain

dessin_fire_123(12)

In other words, we start with a simple regression, in red: we start from a constant around 3.5, then we increase our prediction, taking x_1 into account. Then we correct with x_2 (or rather, with the part of x_2 not explained by x_1), applied to what had not been explained so far, which pushes us to revise our prediction slightly downwards; then we take x_3 into account (or, here again, its projection on the orthogonal complement of x_1 and x_2). Note that the prediction is exactly the same as before.
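
As a quick numerical check (a small sketch, using the BC123 matrix built above), the four components of the cascade should add up, observation by observation, to the prediction of the multiple regression, up to numerical noise:

# the constant, the x1 term and the two corrections should sum to the
# prediction of the full model, for every observation
max(abs(rowSums(BC123[, c("x0","x1","x2","x3")]) - BC123[, "yp"]))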

But we can go further… why start with x_1? We could also start with x_2. We then consider \widehat{y}_i=\underbrace{\widehat{b}_0+\widehat{b}_2 x_{2,i}}_{(2)}+\underbrace{\widehat{b}_{0,1}+\widehat{b}_1 \tilde{x}_{1,i}}_{(1)}+\underbrace{\widehat{b}_{0,3}+\widehat{b}_3 \tilde{x}_{3,i}}_{(3)} where the first term (2) is obtained by simply regressing y on x_2, y_i=b_0+b_2 x_{2,i}+\eta_i, and then we add a small correction to take into account what x_1 tells us once that first regression has been run, for (1), \Pi_{x_2^{\perp}}y_i=b_{0,1}+b_1\Pi_{x_2^{\perp}}x_{1,i}+\eta_{1,i}, and then, for (3), \Pi_{(x_1,x_2)^{\perp}}y_i=b_{0,3}+b_3 \Pi_{(x_1,x_2)^{\perp}}x_{3,i}+\eta_{3,i}. That is,

reg1=lm(Fire~X_2,data=chicago)
reg2=lm(residuals(lm(Fire~X_2,data=chicago))~residuals(lm(X_1~X_2,data=chicago)))
reg3=lm(residuals(lm(Fire~X_1+X_2,data=chicago))~residuals(lm(X_3~X_1+X_2,data=chicago)))
BC213=cbind(x0=coefficients(reg1)[1],
            x1=coefficients(reg1)[2]*chicago[,3],
            x2=coefficients(reg2)[1]+coefficients(reg2)[2]*residuals(lm(X_1~X_2,data=chicago)),
            x3=coefficients(reg3)[1]+coefficients(reg3)[2]*residuals(lm(X_3~X_1+X_2,data=chicago)),
            yp=predict(reg),y=chicago$Fire)

and visually, we obtain

dessin_fire_213(12)

This time, the constant is a bit higher, and we start with a regression on x_2 only, ending up slightly below 10. Then we correct. Note that this second correction brings us back… to exactly the same place as in the previous case. But if we think in terms of successive projections, this should not come as a surprise.

We can recap with another prediction, first with the multiple regression model (but here again, x_1 does not explain much)

dessin_fire(15)

then starting with a simple regression on x_1, followed by the cascading regressions,

dessin_fire_123(15)

or, alternatively, starting with a simple regression on x_2

dessin_fire_213(15)

As announced, these three approaches are equivalent, and give exactly the same prediction.

Predicting the number of deaths, continued

A few days ago, a tweet by Baptiste Coulmont inspired a post on estimating excess mortality in France. Tonight, it is a tweet by Alexandre Blanchet, about excess mortality in Québec.

I was surprised, because the prediction is well above the past years, contrary to what we could observe in France. Starting again from Alexandre’s code (data_prep.R then predictions.R), I reshaped the graph a bit, to get a long-term view, and to understand the linear trend used by Alexandre (who uses a Buys-Ballot-type model)

I thought we could try some exponential smoothing models, à la Holt-Winters,

library(forecast)
x=ts(df.l_training$deces,start = 2010, frequency = 12)
x.hw <- ets(x, model = "AAA")
autoplot(forecast(x.hw,24))

in an additive version, or a multiplicative one

x.hw2 <- ets(x, model = "MAM")
autoplot(forecast(x.hw2,24))

I leave it to others to compare with the numbers of deaths observed over the past two months; for my part, I remain fascinated by the upward trend in mortality over the past 10 years… And I will try to compare these three models as soon as I am done grading the winter-term exams.
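
While waiting for that proper comparison, here is a minimal sketch of how the two smoothing models fitted above could be compared in-sample (a real comparison should of course be done on a holdout period, against the observed deaths):

# rough in-sample comparison of the additive and multiplicative ETS fits
c(AIC_additive = x.hw$aic, AIC_multiplicative = x.hw2$aic)
accuracy(x.hw)   # training-set errors, additive model
accuracy(x.hw2)  # training-set errors, multiplicative model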

Estimating excess mortality

This morning, Baptiste Coulmont sent me a tweet with a nice graph, showing the daily number of deaths in France.

Since the data are online, I told myself I could play with them a little. For the more curious, we have the list of all deaths since… a long time ago! (more than 25 million deaths). The only thing we are interested in is the date, so we extract it. Then we count, for each day, how many deaths occurred,

D = read.csv("insee_deces.csv",header=TRUE)
Vecteur_Dates = as.character(D[,8])
TV = table(Vecteur_Dates)
B = data.frame(dateC=names(TV),nb=as.numeric(TV))
B$date=as.Date(B$dateC,"%Y-%m-%d")
B$year = format(B$date,"%Y")
B$month = format(B$date,"%m")
B$day = format(B$date,"%d")
B$deb=as.Date(paste(B$year,"01-01",sep="-"),"%Y-%m-%d")
B$fin=as.Date(paste(B$year,"12-31",sep="-"),"%Y-%m-%d")
B$dif=as.numeric(B$date-B$deb)/as.numeric(B$fin-B$deb)

Here I create a variable that tells me where I am within the current year (0 for January 1st, and 1 for December 31st). This allows me to work around a small technical issue: leap years. We can then plot all the years,

plot(B2$dif[B2$year == "2020"],B2$NormCpte[B2$year == "2020"],type="l",xlim=0:1,ylim=c(1200,4000),col="white")
for(i in as.character(2000:2019)) lines(B2$dif[B2$year == i],B2$NormCpte[B2$year == i],col="light blue")

We then want the average trend. Baptiste took the daily average (the black curve). Here, we will smooth with splines

library(gam)
reg = gam(NormCpte~bs(dif,40),data=B2[B2$year != "2020",])
vx = seq(0,1,length=501)
vy = predict(reg,newdata=data.frame(dif=vx))
lines(vx,vy,col="blue",lwd=3)

which gives the blue curve. Excess mortality is the gap to this curve. For example (let us take a well-known example, clearly visible on the previous graph), consider the year 2003, and the first fifteen days of August, corresponding (more or less) to the heat-wave period.

i = 2003
x = B2$dif[(B2$year == i)&((B2$month == "08"))&(B2$day %in% c(paste("0",1:9,sep=""),10:15))]
y = B2$NormCpte[(B2$year == i)&((B2$month == "08"))&(B2$day %in% c(paste("0",1:9,sep=""),10:15))]
yp = predict(reg,newdata=data.frame(dif=x))
e = y-yp
sum(e)
[1] 14294.39

Several figures are mentioned on Wikipedia, including the INSERM report, which announced 14,802 deaths. We are not too far off… We can visualize this excess mortality on the graph below

plot(B2$dif[B2$year == "2020"],B2$NormCpte[B2$year == "2020"],type="l",col="white",xlim=0:1,ylim=c(1200,4000))
for(i in as.character(2000:2019)) lines(B2$dif[B2$year == i],B2$NormCpte[B2$year == i],col="light blue")
lines(vx,vy,col="blue")
lines(B2$dif[B2$year == 2003],B2$NormCpte[B2$year == 2003],col="red")
for(u in 1:length(x)) segments(x[u],y[u],x[u],yp[u],lwd=3,col="red")

Let us take another example, clearly visible on Baptiste’s graph: the 2017 flu (the so-called 2016-2017 flu). If we take the first 45 days of 2017, we obtain the following figures

i = 2017
x=B2$dif[(B2$year == i)&((B2$month == "01")|
  ((B2$month == "02")&(B2$day %in% c(paste("0",1:9,sep=""),10:15))))]
y=B2$NormCpte[(B2$year == i)&((B2$month == "01")|
  ((B2$month == "02")&(B2$day %in% c(paste("0",1:9,sep=""),10:15))))]
yp = predict(reg,newdata=data.frame(dif=x))
e = y-yp
sum(e)
[1] 21177.33

The Santé Publique France report mentions 21,200 deaths (for “all-cause excess mortality”), which is again comparable…

Now, before concluding, and before getting trolled for a few days, I want to make clear that this is just a modelling exercise, and yes, I excluded 2020. I do not want to make any comparison with Covid-19, to which 21,000 deaths are attributed in France as of tonight. Even if the order of magnitude is the same, do not make me say that we are in a comparable situation, and that Covid-19 is ultimately just a flu. I will not take a stand on that point since I am not a virologist. But above all, we have 21,000 deaths despite extraordinary lockdown measures. I will probably come back in the coming days to the impact of lockdown-based control, but the point of tonight’s short post was to see how we could quickly quantify excess mortality.

  • a small technical note: for the 3 graphs (but not for the code and the computations) I had initially normalized the data. Indeed, the French population has changed quite a bit in 20 years, increasing by about 10%… So 1,000 deaths in 2003 is not the same thing (relative to the size of the population) as 1,000 deaths in 2020.
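
Since the B2 data frame (with the NormCpte column used in the plots above) is not shown here, the sketch below is a hypothetical reconstruction of that normalization, with rough population figures (linearly interpolated between approximate INSEE values of about 60.9 million in 2000 and 67.3 million in 2020), for illustration only:

# hypothetical normalization: rescale each daily count by the ratio of the
# 2020 population to the population of the corresponding year (rough figures)
pop = setNames(seq(60.9e6, 67.3e6, length.out = 21), as.character(2000:2020))
B2 = B[B$year %in% names(pop), ]
B2$NormCpte = B2$nb * pop["2020"] / pop[B2$year]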

Function basis and regression

In the first part of the course on linear models, we’ve seen how to construct a linear model when the vector of covariates \boldsymbol{x} is given, so that \mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x}) is either simply \boldsymbol{x}^\top\boldsymbol{\beta} (for standard linear models) or a functional of \boldsymbol{x}^\top\boldsymbol{\beta} (in GLMs). But more generally, we can consider transformations of the covariates, so that a linear model can be used. In a very general setting, consider \sum_{j=1}^m\beta_j h_j(\boldsymbol{x}) with h_j:\mathbb{R}^p\rightarrow\mathbb{R}. The standard linear model is obtained when m=p and h_j(\boldsymbol{x})=x_j, but of course, much more general models can be obtained, for instance with h_k(\boldsymbol{x})=x_j^2 or h_k(\boldsymbol{x})=x_{j}x_{j'}, that could be used to achieve high-order Taylor expansions. In that case, we will obtain the polynomial regression, that we will discuss first. We might also think of piecewise constant functions, h_k(\boldsymbol{x})=\boldsymbol{1}(x_j\in [a,b]), that could be related to regression trees (but that is not in the scope of the STT5100 course). And if we go one step further, we might think of piecewise linear or piecewise polynomial functions, possibly with additional continuity constraints, that will lead us to spline bases.

  • Polynomial regression

For pedagogical purposes, when I talk about polynomial regression, I always have in mind (in the univariate case) y=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_kx^k+\varepsilon, but if we use

lm(y~poly(x,k))

in R, the output is not the \beta_j‘s.

As discussed in Kennedy & Gentle (1980) Statistical Computing,

Recall that orthogonal polynomials are defined with respect to the classical inner product (on the finite interval (a,b)) {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x)g(x)~\mathrm {d} x}. And a sequence of orthogonal polynomials is (P_n) where P_n is a polynomial of degree n, for all n, and such that P_m\perp P_n for all m\neq n. Note that those polynomials are orthogonal with respect to the inner product defined above, i.e. given some finite interval (a,b). But if (a,b) changes, the polynomials will be different.

A popular family of orthogonal polynomials, on the finite interval (-1,+1), is the family of Legendre polynomials, satisfying {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)~\mathrm {d} x=0} as soon as m\neq n. Those polynomials satisfy Bonnet’s recursion formula {\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} or Rodrigues’ formula {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}}. The first values are {\displaystyle P_{0}(x)=1}, {\displaystyle P_{1}(x)=x}, {\displaystyle P_{2}(x)={\frac {3x^{2}-1}{2}}}, {\displaystyle P_{3}(x)={\frac {5x^{3}-3x}{2}}}, {\displaystyle P_{4}(x)={\frac {35x^{4}-30x^{2}+3}{8}}}

Interestingly, we can get those polynomial functions using

library(orthopolynom)
(leg4coef = legendre.polynomials(n=4))
[[1]]
1 
 
[[2]]
x 
 
[[3]]
-0.5 + 1.5*x^2 
 
[[4]]
-1.5*x + 2.5*x^3 
 
[[5]]
0.375 - 3.75*x^2 + 4.375*x^4

Of course, there are many families of orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, etc). Now, in R, there is the standard poly function, that we use in polynomial regression.

x = seq(-1,1,length=101)
y = poly(x,4)
y
                   1            2             3            4
  [1,] -1.706475e-01  0.215984813 -2.480753e-01  0.270362873
  [2,] -1.672345e-01  0.203025724 -2.183063e-01  0.216290298
...
[100,]  1.672345e-01  0.203025724  2.183063e-01  0.216290298
[101,]  1.706475e-01  0.215984813  2.480753e-01  0.270362873
attr(,"coefs")
attr(,"coefs")$alpha
[1] 3.157229e-17 2.655145e-16 9.799244e-17 5.368224e-16
 
attr(,"coefs")$norm2
[1]   1.0000000 101.0000000  34.3400000   9.3377328   2.4472330   0.6330176
 
attr(,"degree")
[1] 1 2 3 4
attr(,"class")
[1] "poly"   "matrix"

But these are not Legendre polynomials… As explained in 李哲源‘s post on stackoverflow, the idea is to start with P_{-1}(x)=0, P_{0}(x)=1 and P_{1}(x)=x, and then define \ell_n=\langle P_n,P_n\rangle, as well as \alpha_n=\langle xP_n,P_n\rangle/\ell_n=\langle P_n^2,P_1\rangle/\ell_n and \beta_n=\ell_n/\ell_{n-1}. Finally, define recursively {\displaystyle P_{n}(x)=(x-\alpha_{n-1})P_{n-1}(x)-\beta_{n-1}P_{n-2}(x)} and its normalized version, \tilde{P}_{n}=P_n/\sqrt{\ell_n}. That is what poly computes.

So, for pedagogical purposes, I said that I like to use y=\boldsymbol{x}^\top\boldsymbol{\beta}+\varepsilon where \boldsymbol{x}=(1,x,x^2,\cdots,x^{k-1},x^k). And actually, when using poly, we use the QR decomposition of that design matrix. As discussed in 李哲源‘s post, we can almost reproduce the poly function using

my_poly = function (x, degree = 1) {
    xbar = mean(x)
    x = x - xbar
    QR = qr(outer(x, 0:degree, "^"))
    X = qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
    X2 = X * X
    norm2 = colSums(X * X)   
    alpha = drop(crossprod(X2, x)) / norm2
    beta = norm2 / (c(length(x), norm2[-degree]))
    colnames(X) = 1:degree
    scale = sqrt(norm2)
    X = X * rep(1 / scale, each = length(x))
    X}
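
Just to check (a small sketch, using the x defined above): my_poly should reproduce the output of poly, up to numerical noise, since both rely on the same QR-based construction,

# the two matrices of orthogonal polynomials should coincide
max(abs(my_poly(x, 4) - unclass(poly(x, 4))))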

Nevertheless, the two models are equivalent. More precisely,

plot(cars)
reg1 = lm(dist~speed+I(speed^2)+I(speed^3),data=cars)
reg2 = lm(dist~poly(speed,3),data=cars)
u = seq(3,26,by=.1)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

We have exactly the same prediction here

v1[u==15]
     121 
38.43919 
v2[u==15]
     121 
38.43919

And probably also quite interesting: the coefficients do not have the same interpretation (since we do not have the same basis), but the p-value for the highest degree is exactly the same here! Here, both models reject, with the same confidence, the term of degree three,

summary(reg1)
 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.50505   28.40530  -0.687    0.496
speed         6.80111    6.80113   1.000    0.323
I(speed^2)   -0.34966    0.49988  -0.699    0.488
I(speed^3)    0.01025    0.01130   0.907    0.369
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
 
summary(reg2)
 
Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)        42.98       2.15  19.988  < 2e-16 ***
poly(speed, 3)1   145.55      15.21   9.573  1.6e-12 ***
poly(speed, 3)2    23.00      15.21   1.512    0.137    
poly(speed, 3)3    13.80      15.21   0.907    0.369    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
 
  • B-splines regression (and GAMs)

Splines are also important in regression models, especially when we start talking about Generalized Additive Models. See Perperoglou, Sauerbrei, Abrahamowicz & Schmid (2019) for a review. In the univariate case, I introduce (linear) splines through positive parts, in the sense that y=\beta_0+\beta_1x+\beta_2(x-s_1)_++\cdots+\beta_k(x-s_{k-1})_++\varepsilon where (x-s)_+ equals 0 if x<s and x-s if x>s. Those functions are nice since they are continuous, so the model is continuous (the weighted sum of continuous functions is continuous). And we can go one step further, with y=\beta_0+\beta_1x+\beta_2x^2+\beta_3(x-s_1)^2_++\cdots+\beta_k(x-s_{k-2})^2_++\varepsilon with quadratic splines, or y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4(x-s_1)^3_++\cdots+\beta_k(x-s_{k-3})^3_++\varepsilon for cubic splines. Interestingly, quadratic splines are not only continuous, but their first derivative is also continuous (and the second one for cubic splines). So the discontinuity at the knots s_1,s_2,\cdots is now invisible…

I like those models since they are easy to interpret. For example, the simple model \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear trend, with slope \beta_1, and for larger values of x, there is a linear trend with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.

Unfortunately, this is not what R uses with the bs function, which computes standard B-splines. Just to visualize (I will skip the maths here), with R, we have

library(splines)
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

while the functions I mentioned were (more or less) the following

pos = function(x,s) (x-s)*(x>s)
par(mfrow=c(1,2))
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = cbind(pos(x,5),pos(x,10),pos(x,20))
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
pos2 = function(x,s) (x-s)^2*(x>s)
B = cbind(pos(x,5)*20,pos2(x,5),pos2(x,10),pos2(x,20))
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

And as for the polynomial regression, the two models are equivalent. For example

plot(cars)
reg1 = lm(dist~speed+pos(speed,10)+pos(speed,20),data=cars)
reg2 = lm(dist~bs(speed,degree=1,knots=c(10,20)),data=cars)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

or more specifically

v1[u==15]
     121 
39.35747 
v2[u==15]
     121 
39.35747

So one more time, the two models are equivalent, but I still find the approach with the positive part more intuitive, and easy to understand. As well as the interpretation of coefficients,

summary(reg1)
 
Coefficients:
               Estimate Std. Error t value Pr(>|t|)  
(Intercept)     -7.6305    16.2941  -0.468   0.6418  
speed            3.0630     1.8238   1.679   0.0998 .
pos(speed, 10)   0.2087     2.2453   0.093   0.9263  
pos(speed, 20)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11
 
summary(reg2)
 
Coefficients:
                                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                                  4.621      9.344   0.495   0.6233    
bs(speed, degree = 1, knots = c(10, 20))1   18.378     10.943   1.679   0.0998 .  
bs(speed, degree = 1, knots = c(10, 20))2   51.094     10.040   5.089 6.51e-06 ***
bs(speed, degree = 1, knots = c(10, 20))3   88.859     12.047   7.376 2.49e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

Here we can see directly that the first knot was not interesting (the slope did not change significantly) while the second one was…
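
To make that interpretation concrete, here is a small sketch: with the positive-part parametrization of reg1, the three slopes of the broken line can be read directly from the coefficients,

# slope below the first knot, between the two knots, and above the second knot
beta_hat = unname(coefficients(reg1))
c(below_10 = beta_hat[2],
  from_10_to_20 = beta_hat[2] + beta_hat[3],
  above_20 = beta_hat[2] + beta_hat[3] + beta_hat[4])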

Testing for a causal effect (with 2 time series)

A few days ago, I came back on a sentence I found (in a French newspaper), where someone was claiming that

“… an old variable explains 85% of the change in a new variable. So we can talk about causality”

and I tried to explain that it was just stupid: if we consider the regression of the temperature on day t+1 against the number of cyclists on day t, the R^2 exceeds 80%… but it is hard to claim that the number of cyclists on a given day will actually cause the temperature on the next day…
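
Just to illustrate that point, here is a small simulated sketch (not the actual data used below): two series that merely share a seasonal pattern, but are otherwise independent, already give a very high R^2 when one is regressed on the lag of the other,

# two independent series driven by the same seasonal pattern
set.seed(123)
n = 365*2
s = sin(2*pi*(1:n)/365)
temperature = 10 + 10*s + rnorm(n, 0, 2)
cyclists = 2000 + 1500*s + rnorm(n, 0, 200)
# R^2 of the "temperature tomorrow explained by cyclists today" regression
summary(lm(temperature[-1] ~ cyclists[-n]))$r.squared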

Nevertheless, that was frustrating, and I was wondering if there was a clever way to test for causality in that case. A popular one is Granger causality (I can mention a paper we published a few years ago where we use such a test, Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media). To explain that test, consider a bivariate time series (just like the one we have here), \boldsymbol{z}_t=(x_t,y_t), and consider some bivariate autoregressive model
{\displaystyle {\begin{bmatrix}x_{t}\\y_{t}\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\end{bmatrix}}+{\begin{bmatrix}a_{1,1}&\textcolor{red}{a_{1,2}}\\\textcolor{blue}{a_{2,1}}&a_{2,2}\end{bmatrix}}{\begin{bmatrix}x_{t-1}\\y_{t-1}\end{bmatrix}}+{\begin{bmatrix}u_{t}\\v_{t}\end{bmatrix}}} where \boldsymbol{\varepsilon}_t=(u_t,v_t) is some bivariate white noise, in the sense that (i) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t})=\boldsymbol{0}} (the noise is centered), (ii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^\top)=\Omega }, so the variance matrix is constant, but possibly non-diagonal, and (iii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t-h}^\top)=\boldsymbol{0} } for all h\neq 0. Note that we can use the simplified expression {\displaystyle {\boldsymbol{z}_t=\boldsymbol{c}+\boldsymbol{A}\boldsymbol{z}_{t-1}+\boldsymbol{\varepsilon}_t}}. Now, the Granger test is based on several quantities. With off-diagonal terms of matrix \Omega, we have a so-called instantaneous causality, and since \Omega is symmetric, we will write x\leftrightarrow y. With off-diagonal terms of matrix \boldsymbol{A}, we have a so-called lagged causality, with either \textcolor{blue}{x\rightarrow y} or \textcolor{red}{x\leftarrow y} (and possibly both, if both terms are significant).

So I wanted to try on my two-variable problem.

df = read.csv("cyclistsTempHKI.csv")
dfts = cbind(C=ts(df$cyclists,start = c(2014, 1,2), frequency = 365),
             T=ts(df$meanTemp,start = c(2014, 1,2), frequency = 365))
library(vars)

I now have “time series” objects, and we can fit a VAR model,

var2 = VAR(dfts, p = 1, type = "const")
coefficients(var2)
$C
         Estimate   Std. Error   t value      Pr(>|t|)
C.l1    0.8684009   0.02889424 30.054460 8.080226e-107
T.l1   70.3042012  20.07247411  3.502518  5.102094e-04
const 807.6394001 187.75472482  4.301566  2.110412e-05
 
$T
           Estimate   Std. Error   t value     Pr(>|t|)
C.l1   0.0003865391 6.257596e-05  6.177118 1.540467e-09
T.l1   0.6611135594 4.347074e-02 15.208241 6.086394e-42
const -1.6413074565 4.066184e-01 -4.036481 6.446018e-05

For instance, we can run a causality test, to see whether the number of cyclists can cause the temperature (on the next day)

causality(var2, cause = "C")
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 38.157, df1 = 1, df2 = 842, p-value = 1.015e-09

Here, we should clearly reject H_0, the hypothesis that there is no causal effect. This is the way statisticians say that there should be some causal effect of the number of cyclists on the temperature…

So clearly, something is wrong here. Either it is some sort of superpower that cyclists are not aware of. Or this test, which has been used for forty years (Clive Granger even got a Nobel Prize for it), is not working. Or we missed something. Actually… I think we missed something here. Possibly because the series are not stationary. We can almost see it with

Phi = matrix(c(coefficients(var2)$C[1:2,1],coefficients(var2)$T[1:2,1]),2,2)
eigen(Phi)
eigen() decomposition
$values
[1] 0.9594810 0.5700335

where the largest eigenvalue is very close to one. But actually, let us look at the series, and at the temperature in particular…

plot(dfts)

so, at least, we should expect some seasonal unit root here. So let us use two techniques. The first one is a classical one-year difference, \Delta_{365}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-365}

var2 = VAR(diff(dfts,365), p = 1, type = "const")
coefficients(var2)
$C
          Estimate   Std. Error   t value     Pr(>|t|)
C.l1     0.8376424   0.07259969 11.537823 1.993355e-16
T.l1    42.2638410  28.58783276  1.478386 1.449076e-01
const -507.5514795 219.40240747 -2.313336 2.440042e-02
 
$T
         Estimate   Std. Error   t value     Pr(>|t|)
C.l1  0.000518209 0.0003277295 1.5812096 1.194623e-01
T.l1  0.598425288 0.1290511945 4.6371154 2.162476e-05
const 0.547828079 0.9904263469 0.5531235 5.823804e-01

The test on the fitted VAR model yields

causality(var2, cause = "C") 
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 2.5002, df1 = 1, df2 = 112, p-value = 0.1167

i.e., with a p-value close to 12%, we cannot claim that the number of cyclists causes the temperature (on the next day), and actually, we should also reject the causal effect in the other direction

causality(var2, cause = "T") 
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 2.1856, df1 = 1, df2 = 112, p-value = 0.1421

Nevertheless, if we look at the instantaneous causality, this one makes more sense

$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 13.081, df = 1, p-value = 0.0002982

The second idea would be to use a one day difference, \Delta_{1}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-1} and to fit a VAR model on that one

VARselect(diff(dfts,1), lag.max = 4, type="const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n) 
     3      3      2      3

but on that one, a VAR(1) model – with only one lag – might not be sufficient. It might be better to consider a VAR(3)

var2 = VAR(diff(dfts,1), p = 3, type = "const")

and on that one, one more time, we should reject the causal effect of the number of cyclists on the temperature (on the next day)

causality(var2, cause = "C")  
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 0.67644, df1 = 3, df2 = 828, p-value = 0.5666

and this time, there could be a (lagged) causal effect of the temperature on the number of cyclists

causality(var2, cause = "T")  
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 7.7981, df1 = 3, df2 = 828, p-value = 3.879e-05
 
$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 55.83, df = 1, p-value = 7.905e-14

together with a strongly significant instantaneous relationship… So it looks like Granger causality performs well on that one!

Combining the levels of a factor variable

A quick post to revisit a point we saw this morning in the STT5100 course, to illustrate the Fisher (F) test. We will use data on apartment prices in Poland (a dataset used quite a lot in my draft lecture notes)

library(DALEX)
data(apartments)
with(data = apartments, boxplot(m2.price ~ district))

We would like to group some levels here (this is actually suggested by the regression itself, 5 of the explanatory dummy variables being not significant here). To see things more clearly, we can reorder the levels

A = with(data = apartments, aggregate(m2.price,by=list(district),FUN=mean))
A = A[order(A$x),]
L = as.character(A$Group.1)
apartments$district = factor(apartments$district, levels=L)
with(data = apartments, boxplot(m2.price ~ district))

We will take the cheapest district as the reference here,

reg=lm(m2.price ~ district, data=apartments)
summary(reg)
 
Coefficients:
                    Estimate Std. Error t value Pr(>|t|)    
(Intercept)          2968.36      58.02  51.160   <2e-16 ***
districtBielany        17.38      84.16   0.207    0.836    
districtPraga          26.45      85.12   0.311    0.756    
districtUrsynow        42.01      82.65   0.508    0.611    
districtBemowo         80.10      83.71   0.957    0.339    
districtUrsus         102.01      82.25   1.240    0.215    
districtZoliborz      829.59      83.94   9.884   <2e-16 ***
districtMokotow       887.10      81.86  10.837   <2e-16 ***
districtOchota        987.93      84.16  11.738   <2e-16 ***
districtSrodmiescie  2214.39      83.28  26.591   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 597.4 on 990 degrees of freedom
Multiple R-squared:  0.5698,	Adjusted R-squared:  0.5659 
F-statistic: 145.7 on 9 and 990 DF,  p-value: < 2.2e-16

We can test whether the first 5 coefficients are null, which is a multiple test, and we will use the F test:

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    995 354051715                           
2    990 353269202  5    782513 0.4386 0.8217

The F statistic is small, with a p-value of 82%. Let us push our luck, and add one more level

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0",
                        "districtZoliborz = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F    Pr(>F)    
1    996 405455409                                  
2    990 353269202  6  52186207 24.374 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

But maybe we were too greedy this time. We will group the first 6 levels (and call this group of districts A). If we look at the average prices, by district, we obtain

levels(apartments$district) = c(rep("A",6),levels(apartments$district)[7:10])
with(data = apartments, boxplot(m2.price ~ district))

apartments$district = relevel(apartments$district,"Zoliborz")

We start again, taking the cheapest (remaining) district as the reference, and we want to test whether the next two have null coefficients in the linear regression.

reg=lm(m2.price ~ district, data=apartments)
linearHypothesis(reg, c("districtMokotow = 0",
                        "districtOchota = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    997 355292524                           
2    995 354051715  2   1240809 1.7435 0.1754

With a p-value of 17%, we can accept grouping these three levels together. We then have three groups of districts, named A, B and C. We obtain the following boxplots

levels(apartments$district) = c("B","A",rep("B",2),"C")
apartments$district = relevel(apartments$district,"A")
with(data = apartments, boxplot(m2.price ~ district))

I leave it to the braver readers to check, but we do have three genuinely different groups of districts, and if the goal is to predict housing prices, there is no need to use a partition with 10 levels: a partition with 3 is enough!
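
For the braver readers, here is a sketch of that check, with an F test comparing the 10-level model and the 3-group model (on a fresh copy of the data, since the factor was modified in place above; I assume here that the reference district in the first output was Wola):

data(apartments)   # reload the original factor with 10 districts
full = lm(m2.price ~ district, data = apartments)
grpA = c("Wola","Bielany","Praga","Ursynow","Bemowo","Ursus")
grpB = c("Zoliborz","Mokotow","Ochota")
apartments$district3 = ifelse(apartments$district %in% grpA, "A",
                       ifelse(apartments$district %in% grpB, "B", "C"))
small = lm(m2.price ~ district3, data = apartments)
anova(small, full)   # the grouping should not be rejected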

On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbrace, so, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1) f(x)=x^2/2, then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and function f(x).

x = seq(-100,100,length=6001)
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1,YL=NA){
idx = which(abs(x)<=3)
par(mfrow=c(1,2))
plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
abline(h=0,col="grey")
abline(v=0,col="grey")
idx2 = which(x0*x>=vf)
polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
abline(a=0,b=x0,col="red")
i=which.max(x0*x-vf)
segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
if(is.na(YL)) YL=range(vfstar[idx])
plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
abline(h=0,col="grey")
abline(v=0,col="grey")
segments(x0,0,x0,fstar(x0),lwd=3,col="red")
points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbrace. The first order condition is here x^{\star}=y and thus f^{\star}(y)=\max_{x}\lbrace xy-x^2/2\rbrace=\lbrace x^{\star}y-(x^{\star})^2/2\rbrace=\lbrace y^2-y^2/2\rbrace=y^2/2. And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2 and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where \frac{1}{p}+\frac{1}{q}=1. The second one is the conjugate of a quadratic function: if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
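
We can check that numerically, with the grid and the fstar function defined above: the conjugate of x^2/2 evaluated at y=2 should be 2^2/2=2,

# numerical value of the conjugate at y = 2, against the closed form y^2/2
c(fstar(2), 2^2/2)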

For the conjugate of the \ell_p norm, we can use the following code to visualize it

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost visualize that if f(x)=|x|, then \displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}

To conclude, another popular case: if f(x)=\exp(x), then {\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}} We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))

Combining automatically factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables, using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
  x2=cut(x2,breaks=
  c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
  labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels were too many.

Following my post, Przemyslaw sent a comment suggesting to use

library(factorMerger)

It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, 
             factor = b$x2, 
             family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, and our dataset is automatically transformed: the continuous variables remain unchanged, and categories of the categorical variables are (possibly) merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")= 
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4 
     23      35      26     116

here in four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, one data frame, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient…
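
A quick sketch of what can be done next: refit the linear model with the merged categories, and compare it with the model based on the original 10-level factor, to see how much explanatory power is lost (if any),

reg10 = lm(y ~ x1 + x2, data = b)        # original factor, 10 levels
reg4 = lm(y ~ x1 + x2, data = b.bins)    # merged factor, 4 groups
c(adjR2_10 = summary(reg10)$adj.r.squared,
  adjR2_4 = summary(reg4)$adj.r.squared)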

 

On leverage

Last week, in our STT5100 (applied linear models) class, I’ve introduced the hat matrix, and the notion of leverage. In a classical regression model, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in a matrix form), the ordinary least squares estimator of parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}. The prediction can then be written \widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}, where \color{blue}{\boldsymbol{H}} is called the hat matrix.

The matrix is idempotent, i.e. \boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}, so it can be interpreted as a projection matrix. Furthermore, since \boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is on a subspace that contains all the linear combinations of columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is also a projection matrix. And we can write \boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}, where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} on the (linear) space of linear combinations of columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of residuals, being unpredictable (at least with a linear model using variables \boldsymbol{X}).

Let’s move a bit faster now (we’ve seen many other properties last week), and consider elements on the diagonal of matrix \boldsymbol{H}. Recall that \widehat{\boldsymbol{y}}=\boldsymbol{H}\boldsymbol{y}, i.e. \widehat{y}_i=\boldsymbol{H}_{i,i}y_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}y_j,

so entry \boldsymbol{H}_{i,i} is a measure of the influence of entry \boldsymbol{y}_i on its own prediction \widehat{\boldsymbol{y}}_i.

We have seen that \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top), which can be written \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1})=\text{trace}(\mathbb{I}_p)=p, where classically p=k+1, k being the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2) that \boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2. On the one hand, since the second term is positive, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. 1\geq\boldsymbol{H}_{i,i}. And since both terms are positive, \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in the course on the sharpness of the bounds.

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is here extremely influential: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all other points are equally influential, and because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.

Now, what about the lower bound? In order to have some sort of “non-influential” observation, consider the two following cases:

  • the case where one observation (the first one, below) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
  • the case where one observation (the tenth one, below) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}} (from the first order condition – or normal equations – the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}}))

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
y = c(predict(model,newdata=data.frame(x=4)),
2:9,8,
predict(model,newdata=data.frame(x=1.8))))

We now have

If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, its leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero?

Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above)\boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}which is minimum when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n, otherwise \boldsymbol{H}_{i,i}>1/n. And this property is also valid in a multiple regression (as soon as an intercept is included in the regression – which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}, then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i}(which is basically a matrix version of the previous equation).
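
And we can check that closed-form expression numerically, on the dataset above (writing n\text{Var}(x) as the sum of squared deviations):

# leverages from the closed-form expression, against the values computed above
x = df$x
n = length(x)
h_formula = 1/n + (x - mean(x))^2 / sum((x - mean(x))^2)
max(abs(h_formula - H))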

I can maybe add another comment on Anscombe’s data. We’ve seen on the right that we did reach 1. But I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left. Those all have the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}, hence, using the relationship obtained since the hat matrix is idempotent, \boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2, and thus \boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq0, i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2] with two “duplicates”; more generally, the upper bound becomes 1/(n-1) with n-1 “duplicates”. So the n-1 leverages \boldsymbol{H}_{i,i} of the points on the left are each at most 1/(n-1) (and thus sum to at most 1), the remaining one is at most 1, and the sum of all of them has to be k=2: the only way to reach that sum is to have \boldsymbol{H}_{i,i}=1/(n-1) for the n-1 points on the left, and \boldsymbol{H}_{i,i}=1 for the point on the right. So we have the value of all n \boldsymbol{H}_{i,i}‘s.