Tag Archives: Renglish

R0 and the exponential growth of a pandemic, an update

A few days ago, I wrote a blog post – R0 and the exponential growth of a pandemic – where I was trying to visualize exponential growth in the context of a pandemic. After giving it some thought, I realized that the previous graph might not be the best one to illustrate an exponential contagion.

A graph evolving from left to right gives us the (false) impression of some temporal evolution, which is not necessarily correct. It simply means that contaminated people will contaminate other people, and we look at the number of iterations here. So maybe some concentric dots would look better.

And from a technical perspective, what I did was fun, but probably too complicated. In my previous post, I wanted to pack k identical disks optimally into a unit circle. On http://hydra.nat.uni-magdeburg.de/packing, it was possible to get the “best known packings of equal circles in a circle”, with the coordinates. But as we will see, we can use something much simpler here.

My idea is now to create a picture like the one below, with concentric colored dots. In the center, we have the first people that were contaminated, and then, moving outwards, we can somehow see the transmission.

From a technical perspective, I use a different strategy here: I decided to draw points uniformly at random. The problem with randomness is the naturally high discrepancy of Monte Carlo samples: it is very likely that some disks will overlap. It is not a major issue, but it might distort the message. So I decided to use a low-discrepancy sequence instead, such as Halton‘s sequence.

library(randtoolbox)
S = halton(n=5000, dim = 2)*2-1

Here, I have disk coordinates in [-1,+1]^2. Then, to keep only disks in a circle, I simply compute the (squared) distance to the origin (0,0),

D0 = S[,1]^2+S[,2]^2

and take the ranks. If I want to visualize k=200 people, I consider the 200 smallest ranks. To get concentric groups, where group i contains k_i individuals (here k_i is a power of R_0), I use the cumulated counts \bar k_i=\bar k_{i-1}+k_i as thresholds on those ranks,

R0 = rank(D0,ties.method = "random")
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5)))

where

R0=1.8
k=round(R0^(seq(1,9,by=2)))

Then we can plot the dots, with appropriate colors,

points(S,pch=19,col=colrpal[C0],cex=.75)

And of course, we can try that with different values of R_0

R0=2.2
k=round(R0^(seq(1,9,by=2)))
kmax=max(k)  
S = halton(n=5000, dim = 2)*2-1
plot(S,col="light yellow",axes=FALSE,xlab="",ylab="",xlim=c(-1.3,1),ylim=c(-1,1),cex=.75,pch=19)
D0 = S[,1]^2+S[,2]^2
R0 = rank(D0,ties.method = "random")
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5)))
points(S,pch=19,col=colrpal[C0],cex=.75)
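Note that colrpal (used to colour the dots) is not defined in the snippets above: it is simply a palette with one colour per concentric group (points beyond the last threshold get an NA, and are thus not recoloured). A minimal choice could be, for instance,

colrpal = rev(heat.colors(length(k)))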

R0 and the exponential growth of a pandemic

For some dissemination work, I want to create a nice graph to explain the exponential growth in pandemics, related to the value of R_0. Recall that R_0 corresponds to the average number of people that a contagious person can infect. Hence, with R_0=1.5, 4 people will contaminate 6 people, and those 6 will contaminate 9, etc. After n iterations, the number of newly contaminated people grows like R_0^n. As explained by Daniel Kahneman

people, certainly including myself, don’t seem to be able to think straight about exponential growth. What we see today are infections that occurred 2 or 3 weeks ago and the deaths today are people who got infected 4 or 5 weeks ago. All of this is I think beyond intuitive human comprehension
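Just to give orders of magnitude (a quick numerical sketch), here are the values of R_0^n after 3, 5 and 7 iterations, for a few values of R_0,

# number of newly contaminated people after 3, 5 and 7 iterations,
# one column per value of R0 (here 1.5, 2 and 2.5)
sapply(c(1.5, 2, 2.5), function(R0) round(R0^c(3, 5, 7)))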

For different values of R_0 (on each row), I wanted to visualise the number of contaminated people after 3, 5 or 7 iterations, since graphs are usually the simplest way to give some intuition. The graph I had in mind was the following

(to be honest, I am quite sure I had seen it somewhere, but I cannot find where). The main challenge here is to pack k identical circles optimally into a unit circle: we need the locations of the points (the centers of the disks) and the radius. It turns out to be a rather complicated mathematical problem. Nicely, on http://hydra.nat.uni-magdeburg.de/packing, it is possible to get the “best known packings of equal circles in a circle” (up to 5000 circles, but many values of k are missing). For instance, for k=37, we have

And interestingly, on the same website, we can get the coordinates of the centers, for example with 37 disks, so it is possible to recreate the graph in R.

k = 37
base = read.table(paste("http://hydra.nat.uni-magdeburg.de/packing/cci/txt/cci",k,".txt",sep=""), header=FALSE)

The problem, as discussed earlier, is that some cases are not solved, for instance k=2^{12}=4096: the next available case is 4105. To avoid that issue, one can use

T = "Error"
while(T == "Error"){
  T = substr(try(base <- read.table(paste("http://hydra.nat.uni-magdeburg.de/packing/cci/txt/cci",k,".txt",sep=""), header=FALSE), silent = TRUE), 1, 5)[1]
  k = k+1
} 
k = k-1

Now we can almost plot it. The problem is that the radius of the circles is missing here. But we can recover it, since the smallest distance between two centers is precisely the diameter of the disks,

D=as.matrix(dist(x = base[,2:3]))
diag(D)=1e5
i=which(D == min(D), arr.ind = TRUE)
r = D[i[1,1],i[1,2]]

Here this smallest distance (i.e. the diameter of the disks) is

r
[1] 0.2959118

To plot it, use

plot(base$V2,base$V3,xlim=c(-1,1),ylim=c(-1,1))
n=100
theta=seq(0,pi,length=n+1)
circ = function(x,y,r,h=1){
  # upper half-circle, then its mirror image, to get a closed polygon
  vu=x+r*cos(theta)
  vv=r*sin(theta)
  cbind(c(vu,rev(vu))*h,c(y+vv,y-rev(vv))*h)
}
# colr is some colour, e.g. colr = "red"
for(i in 1:k) polygon(circ(base[i,2],base[i,3],r/2*.95),col=colr,border=NA)

We can now use that code to create the graph above, with k=R_0{}^n for various values of n
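To give an idea of that loop (a rough sketch, not necessarily the exact code used for the figure, reusing the circ function (and the theta vector) defined above; colr is again some colour),

colr = "red"
draw_packing = function(k){
  base = NULL
  # download the first available packing with at least k circles
  while(is.null(base)){
    base = tryCatch(read.table(paste("http://hydra.nat.uni-magdeburg.de/packing/cci/txt/cci",k,".txt",sep=""), header=FALSE), error=function(e) NULL)
    if(is.null(base)) k = k+1
  }
  # the radius is recovered from the smallest distance between two centers
  D = as.matrix(dist(base[,2:3]))
  diag(D) = 1e5
  r = min(D)
  plot(base$V2,base$V3,xlim=c(-1,1),ylim=c(-1,1),type="n",axes=FALSE,xlab="",ylab="")
  for(i in 1:nrow(base)) polygon(circ(base[i,2],base[i,3],r/2*.95),col=colr,border=NA)
}
par(mfrow=c(1,3))
R0 = 1.5
for(it in c(3,5,7)) draw_packing(round(R0^it))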

And we can also use it to visualize more subtle differences, like R_0=1.1, R_0=1.3, R_0=1.5 and R_0=1.7

Hiding values in the output of the summary function for a (linear) regression

Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I started to play intensively with Achim’s awesome r-exams package. But there are still a few things that I wanted to add, so I will write a series of posts on my blog to keep track of the updated functions I write. Most of them are modifications of R internal functions, so the code is hard to read. Here is the file, and I will update it frequently

url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)

I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression

library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: 0.2362

A classical question I ask in my quizzes is to hide the p-value of the F-test, and to ask what it is (to make sure that students understand the equivalence between the F and the t tests, in a simple regression). To hide the p-value, use

my_summary(reg, Fisher=TRUE)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: ■■■■■

(and then, in a multiple choice exam, I ask if it is 1%, 5%, 12%, 23%, 47%, for example). That one was easy, since all those lines are based on the cat function, so I just modify it, if necessary

if(Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
        x$fstatistic[3L], "DF,  p-value:", "■■■■■")
    if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
                   x$fstatistic[3L], "DF,  p-value:", format.pval(pf(x$fstatistic[1L], 
                   x$fstatistic[2L], x$fstatistic[3L], lower.tail = FALSE), 
                   digits = digits))

(here I use the unicode ‘black square‘ symbol to hide numbers). Of course, I can hide the value of \sigma, or the (adjusted or not) R^2, etc.

Now, a little bit trickier: what if we want to change the regression table, with the coefficients, their standard errors, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is done through the printCoefmat function. So in my code, I have an internal function where I ask to put some ‘black squares‘ (the right number of them, to keep a readable format) at some specific locations. Consider a more complex regression

reg = lm(prestige ~ ., data=Prestige)

and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the matrix), and the p-value of the t-test for the fourth row (i.e. located at (4,4) in the matrix – since, on that row, the first column contains the estimate \widehat{\beta}_3, the second one its standard error, the third one the t value, and the fourth one the p-value of the test). I use the following two vectors

vligne = c(1,4)
vcolonne = c(1,4)

with the rows and columns in the matrix (of course, the two should have the same length). Then, the good thing is that the printCoefmat function converts numerical values into characters (to get things that actually look like columns). So we simply have to replace the digits with squares

Cf2=Cf
  if(length(vligne)>0){  
    for(i in 1:length(vligne)){
      long = nchar(Cf[vligne[i],vcolonne[i]])
      Cf2[vligne[i],vcolonne[i]] = paste(rep("■",long),collapse = "")
    }}

Then, we print the updated version of the table

print.default(Cf2, quote = quote, right = right, na.print = na.print,...)

For example, here, it would be

my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))
 
Call:
lm(formula = prestige ~ ., data = Prestige)
 
Residuals:
     Min       1Q   Median       3Q      Max 
-12.9863  -4.9813   0.6983   4.8690  19.2402 
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) ■■■■■■■■■■  8.018e+00  -1.513  0.13380    
education    3.933e+00  6.535e-01   6.019 3.64e-08 ***
income       9.946e-04  2.601e-04   3.824  0.00024 ***
women        1.310e-02  3.019e-02   0.434  ■■■■■■■    
census       1.156e-03  6.183e-04   1.870  0.06471 .  
typeprof     1.077e+01  4.676e+00   2.303  0.02354 *  
typewc       2.877e-01  3.139e+00   0.092  0.92718    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 7.037 on 91 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared:  0.841,	Adjusted R-squared:  0.8306
F-statistic: 80.25 on 6 and 91 DF,  p-value: < 2.2e-16

Of course, it is hand made, and I do not check for misuse (e.g. you should not ask to put squares in a seventh column), but it works well enough to generate random regressions in a quiz (or identical regressions on subsamples of a large dataset), and to hide values.

Sharing pictures from holidays in the Canadian Rockies (with R)

My kids have a very popular blog (at least among their grandmothers) where they frequently post pictures from everyday life (since they live 5000km away from them), as well as pictures taken on holidays. This afternoon, I tried to use the popupImage function (from the mapview package) together with leaflet to post pictures on a map (to explain where we spent our holiday this summer). This post is just to keep track of that code.

First, we need to load the appropriate R packages

library(leaflet)
library(mapview)

Then, we take a picture, and we locate it, for instance Mirror Lake (on the trail to Lake Agnes). Since leaflet uses OpenStreetMap, I recommend using it also to get the location (and not Google Maps… coordinates can be slightly different)

df=data.frame(lat =51.41603, long=-116.23946,
nom = "Mirror Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5967.jpg")

I guess you could also use the metadata if you take pictures with a cell phone that records the location (see the sketch after the next block)… but I am (very) old fashioned, and still use a camera to take pictures. Then you can add a dozen pictures

df=rbind(df, data.frame(lat =51.4164, long=-116.2442,
nom = "Lake Agnes",photo="http://freakonometrics.free.fr/jaspeR/_DSC6003.jpg"))
df=rbind(df, data.frame(lat =51.3215642,long=-116.193718,
nom="Moraine Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5957.jpg"))
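As mentioned above, if the pictures came from a phone with geotagging enabled, the coordinates could probably be read directly from the EXIF metadata instead of being typed manually. A hypothetical sketch, assuming the exifr package (which requires ExifTool) and a photo that actually contains GPS tags,

library(exifr)
# hypothetical file name; if the picture has no GPS tags, those columns will simply be missing
meta = read_exif("_DSC5967.jpg", tags = c("GPSLatitude", "GPSLongitude"))
meta$GPSLatitude
meta$GPSLongitude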

From that dataframe, we need two kinds of information: the location, and the url of the picture,

data_df=df[,c("lat","long")]
images = as.character(df$photo)

Then we can create the leaflet map (sorry for the odd characters elsewhere on this blog: WordPress converts the > symbol into “&gt;”, which makes the R pipe operator %>% hard to read)

m = leaflet(data_df) %>%
  addTiles() %>%
  addCircleMarkers(
    fillOpacity = 0.8, radius = 5,
    lng = ~long, lat =~lat, 
    popup = popupImage(images)
  )

and export it (in a nice html file)

library(htmlwidgets)
saveWidget(m, file="jaspR.html")

Regression discontinuity model for TV series

In September, we are usually happy to see our favorite TV series back on air… Or are we? Because, admit it, even if we are happy to see those characters back, most of the time, we are disappointed, too. So why not look at the data, to confirm this feeling? Nazareno Andrade shared some nice code to get IMDB ratings in a csv file (you can either use the large csv file directly, or run the code yourself)

download.file("https://github.com/nazareno/imdb-series/raw/master/data/series_from_imdb.csv",
destfile="series_from_imdb.csv")
base = read.csv("series_from_imdb.csv")

It is a large dataset, with more than 64,000 episodes of 889 TV series,

str(base)
'data.frame':	64018 obs. of  18 variables:
 $ series_name: Factor w/ 889 levels "'Allo 'Allo!",..: 137 137 137 137 137 137 137 137 137 137 ...
 $ episode    : Factor w/ 54090 levels "-30-","¡Viva los muertos!",..: 32314 7446 16 7176 17748 9562 1379 36218 17845 5553 ...
 $ series_ep  : int  1 2 3 4 5 6 7 8 9 10 ...
 $ season     : int  1 1 1 1 1 1 1 2 2 2 ...
 $ season_ep  : int  1 2 3 4 5 6 7 1 2 3 ...
 $ user_rating: num  8.9 8.7 8.7 8.2 8.3 9.2 8.8 8.7 9.2 8.3 ...

Just pick a TV series, for instance Dan Harmon’s Community,

sbase = base[base$series_name=="Community",]

We can plot the evolution of the rating over the 110 episodes.

sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase$series_ep=1:nrow(sbase)

(since there could be some problems with the data, such as duplicates, we clean them quickly here)

plot(sbase$series_ep,sbase$UserRating,xlab=sbase$series_name[1])
idx=c(0,which(diff(sbase$season)!=0),nrow(sbase))
abline(v=idx+.5,lty=2,col=colr[2])
a = unique(sbase$season)
for(u in a){
  ssbase = sbase[sbase$season==u,]
  reg = lm(UserRating~series_ep,data=ssbase)
  lines(ssbase$series_ep,predict(reg),col=colr[3],lwd=2)
}

The vertical lines are here to visualize the seasons. One issue is that the length of the seasons can vary over time. Consider Linwood Boomer’s Malcolm in the Middle,

sbase = base[base$series_name=="Malcolm in the Middle",]

or Craig Thomas and Carter Bays’s How I Met Your Mother,

sbase = base[base$series_name=="How I Met Your Mother",]

On those two, the evolution is rather stable. Look at AMC’s The Walking Dead,

sbase = base[base$series_name=="The Walking Dead",]

Now, look at Howard Gordon and Alex Gansa’s Homeland,

sbase = base[base$series_name=="Homeland",]

There is an issue here with the last episode of season 4, “Long Time Coming“, which has a very poor rating. If we remove that point, we get the thin line. Note that the regression line is always increasing. For Michael Hirst’s Vikings, we have

sbase = base[base$series_name=="Vikings",]

If we look more carefully at the previous graph, for five seasons (out of six), we have a positive slope. Well, to be honest, it is not significantly positive most of the time, but still. Out of 80 shows, and a total of 583 seasons, the slope is positive 75% of the time (433) and negative 25% of the time (150).

BASE = NULL
L80 = unique(base$series_name)
for(j in 1:length(L80)){
sbase=base[base$series_name==L80[j],]
sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase=sbase[sbase$season>0,]
sbase$series_ep=1:nrow(sbase)
a=unique(sbase$season)
a=a[!is.na(a)]
for(u in a){
  ssbase=sbase[sbase$season==u,]
  reg=lm(UserRating~series_ep,data=ssbase)
  pente = NA
  if((!is.na(coefficients(reg)[2]))&(!is.na((summary(reg)$coefficients[2,4])))){
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]>0)) pente="positive"
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]<0)) pente="negative"
  sdf=data.frame(nom=sbase$series_name[1],season=u,slope=coefficients(reg)[2],inf=confint(reg)[2,1],sup=confint(reg)[2,2],signe=pente)
  BASE=rbind(BASE,sdf)}
}}
 
str(BASE)
'data.frame':	583 obs. of  6 variables:
 $ nom   : Factor w/ 80 levels "Friends","Game of Thrones",..: 1 1 1 1 1 1 1 1 1 1 ...
 
mean(BASE$slope>0)
[1] 0.7427101
table(BASE$signe)
negative positive 
      15      144
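Those proportions can be obtained directly from the BASE table (a quick sketch; recall that, in the loop above, signe remains NA when the slope is not significant at the 5% level),

mean(is.na(BASE$signe))                    # proportion of non-significant slopes
mean(BASE$signe=="positive", na.rm=TRUE)   # among the significant ones, proportion of positive slopes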

Most of the time (72% of the seasons, to be more specific), the slope is not significant. But when it is, it is positive 90% of the time (144 seasons out of 159). Let us look at other TV series, for instance Joel Surnow and Robert Cochran’s 24,

sbase = base[base$series_name=="24",]

Álex Pina’s La Casa de Papel,

sbase = base[base$series_name=="La Casa de Papel",]

Steven Knight’s Peaky Blinders,

sbase = base[base$series_name=="Peaky Blinders",]

or David Simon’s The Wire,

sbase = base[base$series_name=="The Wire",]

The slope is positive over almost all seasons. But a major drawback is that when we come back to our show for a new season, we usually get disappointed. More specifically, we can quantify the drop, in red below,

that can be estimated using

# ij is the index of the first of the two consecutive seasons considered
sbase12 = sbase[sbase$season%in%c(a[ij],a[ij+1]),]
seuil = sbase12$series_ep[which(diff(sbase12$season)!=0)]+.5
s = function(x) (x-seuil)*(x>seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)

Here we have

summary(reg)
Coefficients:
                         Estimate Std. Error t value Pr(>|t|)    
(Intercept)               8.45000    0.16338  51.719  < 2e-16 ***
series_ep                 0.10000    0.03235   3.091 0.008598 ** 
s(series_ep)              0.02000    0.04218   0.474 0.643291    
I(series_ep > seuil)TRUE -1.01778    0.20486  -4.968 0.000257 ***

so the drop of about 1 point (out of 10) at the start of the new season is statistically significant here. That is the idea of regression discontinuity.

If we loop again over all our series, we have 485 pairs of consecutive seasons. As expected, in 75% of the cases, from season t-1 to season t, we observe a negative rupture. As previously, in 70% of the cases it is not significant (with linear models fitted before and after the break), but when it is significant, it is negative in 96% of the cases! An alternative can be to use nonparametric models on both sides.

To illustrate, consider David Benioff and D. B. Weiss’s Game of Thrones,

sbase = base[base$series_name=="Game of Thrones",]

but let us remove the last season (no spoiler here, but it is clearly not worth watching)

Consider for instance the drop between season 1 and season 2,

library(rdd)
sbase12=sbase[sbase$season%in%c(1,2),]
lmr=RDestimate(UserRating~series_ep,data=sbase12,cutpoint=mean(range(sbase12$series_ep)))
plot(lmr)

This is very consistent with what we observed with our linear regressions actually,

seuil=10.5
s = function(x) (x-seuil)*(x>seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)
summary(reg)
 
Coefficients:
                         Estimate Std. Error t value Pr(>|t|)    
(Intercept)               8.70000    0.15458  56.281  < 2e-16 ***
series_ep                 0.07273    0.02491   2.919  0.01003 *  
s(series_ep)              0.01455    0.03523   0.413  0.68520    
I(series_ep > seuil)TRUE -0.94000    0.20316  -4.627  0.00028 ***

Here, the drop of one point is significant…

So, your favorite show had an outstanding finale, and you can’t wait to watch the new season… Well, statistically, it is very likely that you will be disappointed by the first episode of the forthcoming season…

Testing for Covid-19 in the U.S.

For almost a month, on a daily basis, we have been working with colleagues (Romuald, Chi and Mathieu) on modeling the dynamics of the recent pandemic. I learn a lot of things discussing with them, but we keep struggling with the tests. Paul, in Montréal, helped me a little bit, but I think we still have more work to do to get a better understanding. To be honest, we struggle with two very simple questions

  • how many people are tested on a daily basis ?

Recently, I discovered Modelling COVID-19 exit strategies for policy makers in the United Kingdom, which is very close to what we try to do… In that document, two interesting scenarios are discussed: in the first one, “1 million ‘reliable’ daily tests are deployed” (in the U.K.), and in the second one, “5 million ‘useless’ daily tests are deployed”. There are about 65 million inhabitants in the U.K., so we are talking about 1.5% of the population tested on a daily basis, or 7.69%! It could make sense, but our question was: is that realistic? Where do we stand today with testing? In the U.S., https://covidtracking.com/ collects interesting data, on a daily basis, per state.

url = "https://raw.githubusercontent.com/COVID19Tracking/covid-tracking-data/master/data/states_daily_4pm_et.csv"
download.file(url,destfile="covid.csv")
base = read.csv("covid.csv")

Unfortunately, there is no information about the population of each state. That we can find on Wikipedia. But in that table, the state is given by its full name (while it is given by its symbol in the previous dataset). So we also need to match the two datasets properly,

url="https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population"
download.file(url,destfile = "popUS.html")
library(XML)
tables=readHTMLTable("popUS.html")
T=tables[[1]][3:54,c("V3","V4")]
names(T)=c("state","pop")
url="https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations"
download.file(url,destfile = "nameUS.html")
tables=readHTMLTable("nameUS.html")
T2=tables[[1]][13:63,c(1,4)]
names(T2)=c("state","symbol")
T=merge(T,T2)
T$population = as.numeric(gsub(",", "", T$pop, fixed = TRUE))
names(base)[2]="symbol"
base = merge(base,T[,c("symbol","population")])

Now our dataset is fine… and we can write a function to plot the (cumulated) number of people tested in a given state. Here, we distinguish between the positive and the negative tests,

drawing = function(st ="NY"){
sbase=base[base$symbol==st,c("date","positive","negative","population")]
sbase$DATE = as.Date(as.character(sbase$date),"%Y%m%d")
sbase=sbase[order(sbase$DATE),]
par(mfrow=c(1,2))
plot(sbase$DATE,(sbase$positive+sbase$negative)/sbase$population,ylab="Proportion Test (/population of state)",type="l",xlab="",col="blue",lwd=3)
lines(sbase$DATE,sbase$positive/sbase$population,col="red",lwd=2)
legend("topleft",c("negative","positive"),lwd=2,col=c("blue","red"),bty="n")
title(st)
plot(sbase$DATE,sbase$positive/(sbase$positive+sbase$negative),ylab="Ratio of positive tests",ylim=c(0,1),type="l",xlab="",col="black",lwd=3)
title(st)}
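Before looking at individual states, we can also get a rough idea of the nationwide daily testing rate (a small sketch: aggregate the cumulated counts over states, difference them, and divide by the total population of the states in our table),

base$DATE = as.Date(as.character(base$date),"%Y%m%d")
agg = aggregate(cbind(positive,negative) ~ DATE, data=base, sum)
daily_tests = diff(agg$positive + agg$negative)
us_pop = sum(T$population, na.rm=TRUE)
mean(daily_tests/us_pop)   # average share of the population tested per day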

Let us start with New York

drawing("NY")

As of now, 4% of the entire population has been tested… over 6 weeks… The graph on the right is the proportion of tests that came back positive… I won’t get back to that one here today, I keep it for our work. In New Jersey, about 2.5% of the entire population has been tested, overall,

drawing("NJ")

Let us try a last one, Florida

drawing("FL")

As of today, it is 1.5% of the population, over 6 weeks. Overall, in the U.S., less than 0.1% of the population is tested on a daily basis (see the sketch above), which is far from the 1.5% of the U.K. scenarios. Now, here comes the second question,

  • what are we actually testing for ?

On that one, my experience in biology is… very limited, and Paul helped me. He mentioned this morning a nice report, from a lab in UC Berkeley

One of my questions was, for instance: if you test positive, and you get tested again, can you test negative? Or, in the context of our data, do we test different people? Are some people tested on a regular basis (perhaps every week)? For instance, with molecular tests (Reverse Transcription Quantitative Polymerase Chain Reaction, RT-qPCR, also simply called PCR tests) we test whether someone is currently infected, while with antibody tests (serological immunoassays that detect virus-specific antibodies – Immunoglobulin M (IgM) and G (IgG) – also called serology tests), we test for immunity. Which is rather different…

I have no idea what we have in our database, to be honest… and for the past six weeks, I have seen a lot of databases, and most of the time, I don’t know how to interpret them, I don’t know what is measured… and it is scary. So, so far, we try to do some maths, and to explore the dynamics by tuning parameters “the best we can” (rather than estimating them). But if anyone has good references on testing, in the context of Covid-19 (for instance on the specificity and sensitivity of all those tests), I would love to hear about it!

Function basis and regression

In the first part of the course on linear models, we’ve seen how to construct a linear model when the vector of covariates \boldsymbol{x} is given, so that \mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x}) is either simply \boldsymbol{x}^\top\boldsymbol{\beta} (for standard linear models) or a functional of \boldsymbol{x}^\top\boldsymbol{\beta} (in GLMs). But more generally, we can consider transformations of the covariates, so that a linear model can still be used. In a very general setting, consider \sum_{j=1}^m\beta_j h_j(\boldsymbol{x}) with h_j:\mathbb{R}^p\rightarrow\mathbb{R}. The standard linear model is obtained when m=p and h_j(\boldsymbol{x})=x_j, but of course, much more general models can be obtained, for instance with h_k(\boldsymbol{x})=x_j^2 or h_k(\boldsymbol{x})=x_{j}x_{j'}, which could be used to achieve higher-order Taylor expansions. In that case, we obtain the polynomial regression, which we will discuss first. We might also think of piecewise constant functions, h_k(\boldsymbol{x})=\boldsymbol{1}(x_j\in [a,b]), which could be related to regression trees (but that is not in the scope of the STT5100 course). And if we go one step further, we might think of piecewise linear or piecewise polynomial functions, possibly with additional continuity constraints, which will lead us to spline bases.

  • Polynomial regression

For pedagogical purposes, when I talk about polynomial regression, I always have in mind (in the univariate case) y=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_kx^k+\varepsilon, but if we use

lm(y~poly(x,k))

in R, the output is not the \beta_j‘s.

As discussed in Kennedy & Gentle (1980) Statistical Computing,

Recall that orthogonal polynomials are defined with respect to the classical inner product (on the finite interval (a,b)) {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x)g(x)~\mathrm {d} x}. A sequence of orthogonal polynomials is a sequence (P_n) where P_n is a polynomial of degree n, for all n, and such that P_m\perp P_n for all m\neq n. Note that those polynomials are orthogonal with respect to the inner product defined above, i.e. given some finite interval (a,b). But if (a,b) changes, the polynomials will be different.

A popular family of orthogonal polynomial, on finite interval (-1,+1) is the family of Legendre polynomials, satisfying{\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)~\mathrm {d} x=0}as soon as m\neq n. Those polynomials satisfy Bonnet’s recursion formula{\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} or Rodrigues’ formula {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}}The first values are here{\displaystyle P_{0}(x)=1} {\displaystyle P_{1}(x)=x}{\displaystyle P_{2}(x)={\frac {3x^{2}-1}{2}}}{\displaystyle P_{3}(x)={\frac {5x^{3}-3x}{2}}} {\displaystyle P_{4}(x)={\frac {35x^{4}-30x^{2}+3}{8}}}

Interestingly, we can get those polynomial functions using

library(orthopolynom)
(leg4coef = legendre.polynomials(n=4))
[[1]]
1 
 
[[2]]
x 
 
[[3]]
-0.5 + 1.5*x^2 
 
[[4]]
-1.5*x + 2.5*x^3 
 
[[5]]
0.375 - 3.75*x^2 + 4.375*x^4

Of course, there are many families of orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, etc). Now, in R, there is the standard poly function, that we use in polynomial regression.

x = seq(-1,1,length=101)
y = poly(x,4)
y
                   1            2             3            4
  [1,] -1.706475e-01  0.215984813 -2.480753e-01  0.270362873
  [2,] -1.672345e-01  0.203025724 -2.183063e-01  0.216290298
...
[100,]  1.672345e-01  0.203025724  2.183063e-01  0.216290298
[101,]  1.706475e-01  0.215984813  2.480753e-01  0.270362873
attr(,"coefs")
attr(,"coefs")$alpha
[1] 3.157229e-17 2.655145e-16 9.799244e-17 5.368224e-16
 
attr(,"coefs")$norm2
[1]   1.0000000 101.0000000  34.3400000   9.3377328   2.4472330   0.6330176
 
attr(,"degree")
[1] 1 2 3 4
attr(,"class")
[1] "poly"   "matrix"

But these are not Legendre polynomials… As explained in 李哲源‘s post on stackoverflow, the idea is to start with P_{-1}(x)=0, P_{0}(x)=1 and P_{1}(x)=x, and then to define \ell_n=\langle P_n,P_n\rangle as well as \alpha_n=\langle xP_n,P_n\rangle/\ell_n and \beta_n=\ell_n/\ell_{n-1}. Finally, define recursively {\displaystyle P_{n}(x)=(x-\alpha_{n-1})P_{n-1}(x)-\beta_{n-1}P_{n-2}(x)} and its normalized version, \tilde{P}_{n}=P_n/\sqrt{\ell_n}. That is what poly computes.

So, for pedagogical purposes, I said that I like to use y=\boldsymbol{x}^\top\boldsymbol{\beta}+\varepsilon where \boldsymbol{x}=(1,x,x^2,\cdots,x^{k-1},x^k). And actually, when using poly, we use the QR decomposition of that design matrix. As discussed in 李哲源‘s post, we can almost reproduce the poly function using

my_poly = function (x, degree = 1) {
    xbar = mean(x)
    x = x - xbar
    QR = qr(outer(x, 0:degree, "^"))
    X = qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
    X2 = X * X
    norm2 = colSums(X * X)   
    alpha = drop(crossprod(X2, x)) / norm2
    beta = norm2 / (c(length(x), norm2[-degree]))
    colnames(X) = 1:degree
    scale = sqrt(norm2)
    X = X * rep(1 / scale, each = length(x))
    X}
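As a quick check (just a sketch), we can compare the columns returned by my_poly with those returned by poly; they should match, possibly up to the sign of each column,

x = seq(-1,1,length=101)
max(abs(abs(my_poly(x,4)) - abs(poly(x,4))))   # numerically zero if the columns match up to sign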

Nevertheless, the two models are equivalent. More precisely,

plot(cars)
reg1 = lm(dist~speed+I(speed^2)+I(speed^3),data=cars)
reg2 = lm(dist~poly(speed,3),data=cars)
u = seq(3,26,by=.1)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

We have exactly the same prediction here

v1[u==15]
     121 
38.43919 
v2[u==15]
     121 
38.43919

And, probably more interesting: the coefficients do not have the same interpretation (since we do not use the same basis), but the p-value for the highest degree is exactly the same here! Here the two models reject, with the same confidence, the term of degree three,

summary(reg1)
 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.50505   28.40530  -0.687    0.496
speed         6.80111    6.80113   1.000    0.323
I(speed^2)   -0.34966    0.49988  -0.699    0.488
I(speed^3)    0.01025    0.01130   0.907    0.369
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
 
summary(reg2)
 
Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)        42.98       2.15  19.988  < 2e-16 ***
poly(speed, 3)1   145.55      15.21   9.573  1.6e-12 ***
poly(speed, 3)2    23.00      15.21   1.512    0.137    
poly(speed, 3)3    13.80      15.21   0.907    0.369    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
  • B-splines regression (and GAMs)

Splines are also important in regression models, especially when we start talking about Generalized Additive Models. See Perperoglou, Sauerbrei, Abrahamowicz & Schmid (2019) for a review. In the univariate case, I introduce (linear) splines through positive parts, in the sense that y=\beta_0+\beta_1x+\beta_2(x-s_1)_++\cdots+\beta_k(x-s_{k-1})_++\varepsilon where (x-s)_+ equals 0 if x<s and x-s if x>s. Those functions are nice since they are continuous, so the model is continuous (a weighted sum of continuous functions is continuous). And we can go one step further, with y=\beta_0+\beta_1x+\beta_2x^2+\beta_3(x-s_1)^2_++\cdots+\beta_k(x-s_{k-2})^2_++\varepsilon with quadratic splines, or y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4(x-s_1)^3_++\cdots+\beta_k(x-s_{k-3})^3_++\varepsilon for cubic splines. Interestingly, quadratic splines are not only continuous, but their first derivative is also continuous (and the second one for cubic splines). So the discontinuity at the knots s_1,s_2,\cdots is now invisible…

I like those models since they are easy to interpret. For example, the simple model \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function: continuous, with a change of slope (a “rupture”) at knot s.

Observe also the following interpretation: for small values of x (below the knot), the slope is \beta_1, and for larger values of x, the slope becomes \beta_1+\beta_2. Hence, \beta_2 is interpreted as the change of slope at the knot (see the sketch below).
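To visualize that interpretation (a small sketch, with hypothetical values \beta_1=1, \beta_2=-1.5 and a knot at s=5),

pos = function(x,s) (x-s)*(x>s)
x = seq(0,10,by=.1)
plot(x, 1*x - 1.5*pos(x,5), type="l", lwd=2, xlab="x", ylab="")
abline(v=5, lty=2)   # at the knot, the slope changes from 1 to 1-1.5 = -0.5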

Unfortunately, that is not what R uses in the bs function, which implements the standard B-splines. Just to visualize them (I will skip the maths here), with R, we have

library(splines)
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
B=bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

while the functions I mentioned were (more or less) the following

pos = function(x,s) (x-s)*(x>s)
par(mfrow=c(1,2))
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = cbind(pos(x,5),pos(x,10),pos(x,20))
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
pos2 = function(x,s) (x-s)^2*(x>s)
B = cbind(pos(x,5)*20,pos2(x,5),pos2(x,10),pos2(x,20))
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

And as for the polynomial regression, the two models are equivalent. For example

plot(cars)
reg1 = lm(dist~speed+pos(speed,10)+pos(speed,20),data=cars)
reg2 = lm(dist~bs(speed,degree=1,knots=c(10,20)),data=cars)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

or more specifically

v1[u==15]
     121 
39.35747 
v2[u==15]
     121 
39.35747

So one more time, the two models are equivalent, but I still find the approach with the positive parts more intuitive and easier to understand, as is the interpretation of the coefficients,

summary(reg1)
 
Coefficients:
               Estimate Std. Error t value Pr(>|t|)  
(Intercept)     -7.6305    16.2941  -0.468   0.6418  
speed            3.0630     1.8238   1.679   0.0998 .
pos(speed, 10)   0.2087     2.2453   0.093   0.9263  
pos(speed, 20)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11
 
summary(reg2)
 
Coefficients:
                                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                                  4.621      9.344   0.495   0.6233    
bs(speed, degree = 1, knots = c(10, 20))1   18.378     10.943   1.679   0.0998 .  
bs(speed, degree = 1, knots = c(10, 20))2   51.094     10.040   5.089 6.51e-06 ***
bs(speed, degree = 1, knots = c(10, 20))3   88.859     12.047   7.376 2.49e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

Here we can see directly that the first knot was not interesting (the slope did not change significantly) while the second one was…

Testing for a causal effect (with 2 time series)

A few days ago, I came back to a sentence I had found (in a French newspaper), where someone was claiming that

“… an old variable explains 85% of the change in a new variable. So we can talk about causality”

and I tried to explain that it was just stupid: if we regress the temperature on day t+1 against the number of cyclists on day t, the R^2 exceeds 80%… but it is hard to claim that the number of cyclists on a specific day actually causes the temperature on the next day…

Nevertheless, that was frustrating, and I was wondering if there was a clever way to test for causality in that case. A popular one is Granger causality (I can mention a paper we published a few years ago where we use such a test, Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media). To explain that test, consider a bivariate time series (just like the one we have here), \boldsymbol{z}_t=(x_t,y_t), and consider some bivariate autoregressive model
{\displaystyle {\begin{bmatrix}x_{t}\\y_{t}\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\end{bmatrix}}+{\begin{bmatrix}a_{1,1}&\textcolor{red}{a_{1,2}}\\\textcolor{blue}{a_{2,1}}&a_{2,2}\end{bmatrix}}{\begin{bmatrix}x_{t-1}\\y_{t-1}\end{bmatrix}}+{\begin{bmatrix}u_{t}\\v_{t}\end{bmatrix}}}where \boldsymbol{\varepsilon}_t=(u_t,v_t) is some bivariate white noise, in the sense that (i) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t})=\boldsymbol{0}} (the noise is centered), (ii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^\top)=\Omega }, so the variance matrix is constant, but possibly non-diagonal, and (iii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t-h}^\top)=\boldsymbol{0} } for all h\neq 0. Note that we can use the simplified expression{\displaystyle {\boldsymbol{z}_t=\boldsymbol{c}+\boldsymbol{A}\boldsymbol{z}_{t-1}+\boldsymbol{\varepsilon}_t}}. Now, the Granger test is based on several quantities. With the off-diagonal terms of matrix \Omega, we have a so-called instantaneous causality, and since \Omega is symmetric, we will write x\leftrightarrow y. With the off-diagonal terms of matrix \boldsymbol{A}, we have a so-called lagged causality, with either \textcolor{blue}{x\rightarrow y} or \textcolor{red}{x\leftarrow y} (and possibly both, if both terms are significant).
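Before trying it on my two series, a quick sanity check on simulated data (just a sketch, with hypothetical coefficients): here x Granger-causes y through the lagged term \textcolor{blue}{a_{2,1}}=0.3, and the test should detect a causal effect in that direction only,

library(vars)
set.seed(1)
n = 500
x = y = rep(0,n)
for(t in 2:n){
  x[t] = 0.5*x[t-1] + rnorm(1)
  y[t] = 0.4*y[t-1] + 0.3*x[t-1] + rnorm(1)
}
var0 = VAR(cbind(x=x, y=y), p=1, type="const")
causality(var0, cause="x")$Granger   # small p-value expected: x causes y
causality(var0, cause="y")$Granger   # large p-value expected: y does not cause x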

So I wanted to try on my two-variable problem.

df = read.csv("cyclistsTempHKI.csv")
dfts = cbind(C=ts(df$cyclists,start = c(2014, 1,2), frequency = 365),
             T=ts(df$meanTemp,start = c(2014, 1,2), frequency = 365))
library(vars)

I now have “time series” objects, and we can fit a VAR model,

var2 = VAR(dfts, p = 1, type = "const")
coefficients(var2)
$C
         Estimate   Std. Error   t value      Pr(>|t|)
C.l1    0.8684009   0.02889424 30.054460 8.080226e-107
T.l1   70.3042012  20.07247411  3.502518  5.102094e-04
const 807.6394001 187.75472482  4.301566  2.110412e-05
 
$T
           Estimate   Std. Error   t value     Pr(>|t|)
C.l1   0.0003865391 6.257596e-05  6.177118 1.540467e-09
T.l1   0.6611135594 4.347074e-02 15.208241 6.086394e-42
const -1.6413074565 4.066184e-01 -4.036481 6.446018e-05

For instance, we can run a causality test, to see whether the number of cyclists can cause the temperature (on the next day)

causality(var2, cause = "C")
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 38.157, df1 = 1, df2 = 842, p-value = 1.015e-09

Here, we clearly reject H_0, the hypothesis that there is no causal effect. Which is the way statisticians would say that there should be some causal effect of the number of cyclists on the temperature…

So clearly, something is wrong here. Either it is some sort of superpower that cyclists are not aware of. Or this test, which has been used for forty years (Clive Granger even got a Nobel Prize for it), is not working. Or we missed something. Actually… I think we missed something here, possibly because the series are not stationary. We can almost see it with

Phi = matrix(c(coefficients(var2)$C[1:2,1],coefficients(var2)$T[1:2,1]),2,2)
eigen(Phi)
eigen() decomposition
$values
[1] 0.9594810 0.5700335

where the largest eigenvalue is very close to one. And actually, let us look at the series themselves, in particular the temperature…

plot(dfts)

so, at least, we should expect some seasonal unit root here. So let us use two techniques. The first one is a classical one-year difference, \Delta_{365}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-365}

var2 = VAR(diff(dfts,365), p = 1, type = "const")
coefficients(var2)
$C
          Estimate   Std. Error   t value     Pr(>|t|)
C.l1     0.8376424   0.07259969 11.537823 1.993355e-16
T.l1    42.2638410  28.58783276  1.478386 1.449076e-01
const -507.5514795 219.40240747 -2.313336 2.440042e-02
 
$T
         Estimate   Std. Error   t value     Pr(>|t|)
C.l1  0.000518209 0.0003277295 1.5812096 1.194623e-01
T.l1  0.598425288 0.1290511945 4.6371154 2.162476e-05
const 0.547828079 0.9904263469 0.5531235 5.823804e-01

The test on the fitted VAR model yields

causality(var2, cause = "C") 
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 2.5002, df1 = 1, df2 = 112, p-value = 0.1167

i.e., with an 11% p-value, we can no longer claim that the number of cyclists causes the temperature (on the next day); and actually, the same conclusion holds the other way around

causality(var2, cause = "T") 
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 2.1856, df1 = 1, df2 = 112, p-value = 0.1421

Nevertheless, if we look at the instantaneous causality, this one makes more sense

$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 13.081, df = 1, p-value = 0.0002982

The second idea would be to use a one day difference, \Delta_{1}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-1} and to fit a VAR model on that one

VARselect(diff(dfts,1), lag.max = 4, type="const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n) 
     3      3      2      3

but on that one, a VAR(1) model – with only one lag – might not be sufficient. It might be better to consider a VAR(3)

var2 = VAR(diff(dfts,1), p = 3, type = "const")

and on that one, one more time, there is no evidence of a causal effect of the number of cyclists on the temperature (on the next day)

causality(var2, cause = "C")  
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 0.67644, df1 = 3, df2 = 828, p-value = 0.5666

and this time, there could be a (lagged) causal effect of the temperature on the number of cyclists

causality(var2, cause = "T")  
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 7.7981, df1 = 3, df2 = 828, p-value = 3.879e-05
 
$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 55.83, df = 1, p-value = 7.905e-14

together with a strong instantaneous relationship between the two series… So it looks like Granger causality performs well on that one!

Lasso Regression (home made)

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute the Lasso regression, \min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace. We have to define the soft-thresholding function S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0\\0&\text{ if }\gamma\geq|z|\end{cases}The R function would be

soft_thresholding = function(x,a){
sign(x) * pmax(abs(x)-a,0)
}

To solve our optimization problem, set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be solved coordinate by coordinate: for each j, we minimize
\min_{\beta_j}\left\lbrace\frac{1}{2n}\|\mathbf{r}_j-\beta_j\mathbf{x}_j\|^2+\lambda |\beta_j|\right\rbrace
hence\min_{\beta_j}\left\lbrace\frac{1}{2n}\big[\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^\top\mathbf{x}_j\big]+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))   # uniform weights
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))  # intercept, not penalized
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      # partial residuals, removing the contribution of coordinate k
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      # coordinate-wise update via soft-thresholding
      beta[k] = (1/sum(omega*X[,k]^2))*
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    # objective value, stored to monitor convergence
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
           beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - 
             rbind(beta0,beta),'F') < tol) { break } 
  } 
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize“)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, use the OLS estimate

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
 
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
 
$intercept
[1] 2.060999e-16

and we can get the standard Lasso coefficient path by looping over values of \lambda, as sketched below.
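For instance, looping over a (hypothetical) grid of values for \lambda and storing the estimated coefficients (a small sketch, using the lasso_coord_desc function above),

lambdas = exp(seq(log(.001), log(.5), length=50))
path = sapply(lambdas, function(l) lasso_coord_desc(X,y,beta_init,lambda=l)$beta)
matplot(log(lambdas), t(path), type="l", lty=1,
        xlab=expression(log(lambda)), ylab="coefficients")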

Quantile Regression (home made, part 2)

A few months ago, I posted a note with some home made code for quantile regression… there was something odd in the output, but it was because there was a (small) mathematical problem in my equations. Since I am supposed to teach this tomorrow, let me fix them.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbracewhich can be solved using linear programming techniques. More precisely, this problem is equivalent to\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^na_i+b_i\right\rbracewith a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. Heuristically, the idea is to write y_i=\mu+\varepsilon_i, and then define the a_i‘s and b_i‘s so that \varepsilon_i=a_i-b_i and |\varepsilon_i|=a_i+b_i, i.e. a_i=(\varepsilon_i)_+=\max\lbrace0,\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i>0}andb_i=(-\varepsilon_i)_+=\max\lbrace0,-\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i<0}denote respectively the positive and the negative parts.

Unfortunately (that was the error in my previous post), the standard form of a linear program is\min_{\mathbf{z}}\left\lbrace\boldsymbol{c}^\top\mathbf{z}\right\rbrace\text{ s.t. }\boldsymbol{A}\mathbf{z}=\boldsymbol{b},\mathbf{z}\geq\boldsymbol{0}In the equation above, with the a_i‘s and b_i‘s, we’re not far away. Except that we have \mu\in\mathbb{R}, while all the variables should be nonnegative. So similarly, set \mu=\mu^+-\mu^- where \mu^+=(\mu)_+ and \mu^-=(-\mu)_+.

Thus, let\mathbf{z}=\big(\mu^+,\mu^-,\boldsymbol{a},\boldsymbol{b}\big)^\top\in\mathbb{R}_+^{2n+2}and then write the constraint as \boldsymbol{A}\mathbf{z}=\boldsymbol{b} with \boldsymbol{b}=\boldsymbol{y} and \boldsymbol{A}=\big[\boldsymbol{1}_n;-\boldsymbol{1}_n;\mathbb{I}_n;-\mathbb{I}_n\big]. And for the objective function, \boldsymbol{c}=\big(0,0,\boldsymbol{1}_n,\boldsymbol{1}_n\big)^\top\in\mathbb{R}_+^{2n+2}

To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form, with n equality constraints and 2n+2 parameters,

library(lpSolve) 
X = rep(1,n) 
A = cbind(X, -X, diag(n), -diag(n))
b = y
c = c(rep(0,2), rep(1,n),rep(1,n))
equal_type = rep("=", n) 
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 1.077415

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now\min_{q^+,q^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i,q^+,q^-\geq 0 and y_i=q^+-q^-+a_i-b_i, \forall i=1,\cdots,n. The R code is now

c = c(rep(0,2), tau*rep(1,n),(1-tau)*rep(1,n))
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 0.6741586

So far so good…

Quantile Regression

Consider the following dataset, with rents of flats in a major German city, as a function of the surface, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now\min_{\boldsymbol{\beta}^+,\boldsymbol{\beta}^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i\geq 0 and y_i=\boldsymbol{x}^\top[\boldsymbol{\beta}^+-\boldsymbol{\beta}^-]+a_i-b_i\forall i=1,\cdots,n and \beta_j^+,\beta_j^-\geq 0 \forall j=0,\cdots,k. So use here

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] 148.946864   3.289674

Of course, we can use R function to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf=0:250
yr = beta[1] + beta[2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] 121.815505   7.865536
yr = beta[1] + beta[2]*sf
lines(sf,yr,lwd=2,col="blue")

And we can adapt the latter to multiple regression, of course,

X = cbind(1,base$area,base$yearc)
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] -5542.503252     3.978135     2.887234

to be compared with

library(quantreg)
rq(rent_euro~ area + yearc, tau=tau, data=base)
 
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234 
 
Degrees of freedom: 4571 total; 4568 residual

On Cochran Theorem (and Orthogonal Projections)

Cochran Theorem – from The distribution of quadratic forms in a normal system, with applications to the analysis of covariance, published in 1934 – is probably the most important one in a regression course. It is an application of a nice result on quadratic forms of Gaussian vectors. More precisely, we can prove that if \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d) is a random vector with d \mathcal{N}(0,1) variables then (i) if A is a (square) idempotent matrix, \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r where r is the rank of matrix A, and (ii) conversely, if \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r then A is an idempotent matrix of rank r. And just in case, A being an idempotent matrix means that A^2=A, and a lot of results can be derived (for instance on the eigenvalues). The proof of that result (at least part (i)) is nice: we diagonalize matrix A, so that A=P\Delta P^\top, with P orthonormal. Since A is an idempotent matrix, observe thatA=A^2=P\Delta P^\top P\Delta P^\top=P\Delta^2 P^\topwhere \Delta is some diagonal matrix such that \Delta^2=\Delta, so the terms on the diagonal of \Delta are either 0 or 1. And because the rank of A (and \Delta) is r, there should be r 1‘s and d-r 0‘s. Now write\boldsymbol{Y}^\top A\boldsymbol{Y}=\boldsymbol{Y}^\top P\Delta P^\top\boldsymbol{Y}=\boldsymbol{Z}^\top \Delta\boldsymbol{Z}where \boldsymbol{Z}=P^\top\boldsymbol{Y} satisfies \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},PP^\top), i.e. \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d). Thus \boldsymbol{Z}^\top \Delta\boldsymbol{Z}=\sum_{i:\Delta_{i,i}=1}Z_i^2\sim\chi^2_r. Nice, isn’t it? And there is more (that will be strongly connected to Cochran theorem, actually). Let A=A_1+\dots+A_k; then the two following statements are equivalent: (i) A is idempotent and \text{rank}(A)=\text{rank}(A_1)+\dots+\text{rank}(A_k), (ii) the A_i‘s are idempotent and A_iA_j=0 for all i\neq j.
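A quick numerical illustration of (i), just a sketch: take an idempotent matrix of rank r (an orthogonal projection matrix, as discussed below), and check that the quadratic form has mean r, as a \chi^2_r distribution should,

set.seed(123)
d = 5
r = 2
V = matrix(rnorm(d*r), d, r)
A = V %*% solve(t(V) %*% V) %*% t(V)   # orthogonal projection matrix: idempotent, of rank r
Q = replicate(1e4, {Y = rnorm(d); as.numeric(t(Y) %*% A %*% Y)})
mean(Q)   # should be close to r = 2, the mean of a chi-square with r degrees of freedom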

Now, let us talk about projections. Let \boldsymbol{y} be a vector in \mathbb{R}^n. Its projection on the space \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) (generated by those p vectors) is the vector \hat{\boldsymbol{y}}=\boldsymbol{V} \hat{\boldsymbol{a}} that minimizes \|\boldsymbol{y} -\boldsymbol{V} \boldsymbol{a}\| (in \boldsymbol{a}). The solution is\hat{\boldsymbol{a}}=( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top \boldsymbol{y} \text{ and } \hat{\boldsymbol{y}} = \boldsymbol{V} \hat{\boldsymbol{a}}
Matrix P=\boldsymbol{V} ( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top is the orthogonal projection on \{\boldsymbol{v}_1,\dots,\boldsymbol{v}_p\} and \hat{\boldsymbol{y}} = P\boldsymbol{y}.

Now we can recall Cochran theorem. Let \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{\mu},\sigma^2\mathbb{I}_d) for some \sigma>0 and \boldsymbol{\mu}. Consider orthogonal subspaces F_1,\dots,F_m, with dimensions d_i. Let P_{F_i} be the orthogonal projection matrix on F_i; then (i) the vectors P_{F_1}\boldsymbol{Y},\dots,P_{F_m}\boldsymbol{Y} are independent, with respective distributions \mathcal{N}(P_{F_i}\boldsymbol{\mu},\sigma^2\mathbb{I}_{d_i}), and (ii) the random variables \|P_{F_i}(\boldsymbol{Y}-\boldsymbol{\mu})\|^2/\sigma^2 are independent and \chi^2_{d_i} distributed.

We can try to visualize those results. For instance, the orthogonal projection of a random vector has a Gaussian distribution. Consider a two-dimensional Gaussian vector

library(mnormt)
r = .7
s1 = 1
s2 = 1
Sig = matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
Sig
Y = rmnorm(n = 1000,mean=c(0,0),varcov = Sig)
plot(Y,cex=.6)
vu = seq(-4,4,length=101)
vz = outer(vu,vu,function (x,y) dmnorm(cbind(x,y),
mean=c(0,0), varcov = Sig))
contour(vu,vu,vz,add=TRUE,col='blue')
abline(a=0,b=2,col="red")

Consider now the projection of a point \boldsymbol{y}=(y_1,y_2) on the straight line with direction vector \overrightarrow{\boldsymbol{u}}, with slope a (say a=2). To get the projected point \boldsymbol{x}=(x_1,x_2), recall that x_2=ax_1 and \overrightarrow{\boldsymbol{xy}}\perp\overrightarrow{\boldsymbol{u}}. Hence, the following code will give us the orthogonal projections

p = function(a){
x0=(Y[,1]+a*Y[,2])/(1+a^2)
y0=a*x0
cbind(x0,y0)
}

with

P = p(2)
for(i in 1:20) segments(Y[i,1],Y[i,2],P[i,1],P[i,2],lwd=4,col="red")
points(P[,1],P[,2],col="red",cex=.7)

Now, if we look at the distribution of points on that line, we get… a Gaussian distribution, as expected,

z = sqrt(P[,1]^2+P[,2]^2)*c(-1,+1)[(P[,1]>0)*1+1]
vu = seq(-6,6,length=601)
vv = dnorm(vu,mean(z),sd(z))
hist(z,probability = TRUE,breaks = seq(-6,6,by=.25)) # breaks extended to span the range of z
lines(vu,vv,col="red")

Of course, we can use the matrix representation to get the projection on \overrightarrow{\boldsymbol{u}} (or, actually, on a normalized version of that vector)

a=2
U = c(1,a)/sqrt(a^2+1)
U
[1] 0.4472136 0.8944272
matP = U %*% solve(t(U) %*% U) %*% t(U)
matP %*% Y[1,]
[,1]
[1,] -0.1120555
[2,] -0.2241110
P[1,]
x0 y0
-0.1120555 -0.2241110

(which is consistent with our manual computation). Now, in Cochran theorem, we start with independent random variables,

Y = rmnorm(n = 1000,mean=c(0,0),varcov = diag(c(1,1)))

Then we consider the projection on \overrightarrow{\boldsymbol{u}} and \overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{u}}^\perp

U = c(1,a)/sqrt(a^2+1)
matP1 = U %*% solve(t(U) %*% U) %*% t(U)
P1 = Y %*% matP1
z1 = sqrt(P1[,1]^2+P1[,2]^2)*c(-1,+1)[(P1[,1]>0)*1+1]
V = c(a,-1)/sqrt(a^2+1)
matP2 = V %*% solve(t(V) %*% V) %*% t(V)
P2 = Y %*% matP2
z2 = sqrt(P2[,1]^2+P2[,2]^2)*c(-1,+1)[(P2[,1]>0)*1+1]

We can plot those two projections

plot(z1,z2)

and observe that the two are indeed independent Gaussian variables. And (of course) their squared norms are \chi^2_{1} distributed.
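
Both claims are easy to check numerically (a short sketch, reusing z1 and z2 computed above): the correlation should be close to zero, and each squared norm should be \chi^2_1 distributed.

cor(z1, z2)                              # close to zero
ks.test(z1^2, "pchisq", df = 1)          # squared norm of the first projection
ks.test(z2^2, "pchisq", df = 1)          # squared norm of the second projection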

On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbraceso, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1) f(x)=x^2/2, then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and function f(x).

x = seq(-100,100,length=6001)
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1,YL=NA){
idx=which(abs(x)<=3)
par(mfrow=c(1,2))
plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
abline(h=0,col="grey")
abline(v=0,col="grey")
idx2=which(x0*x>=vf)
polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
abline(a=0,b=x0,col="red")
i=which.max(x0*x-vf)
segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
if(is.na(YL)) YL=range(vfstar[idx])
plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
abline(h=0,col="grey")
abline(v=0,col="grey")
segments(x0,0,x0,fstar(x0),lwd=3,col="red")
points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbraceThe first order condition is here x^{\star}=y and thusf^{\star}(y)=\max_{x}\lbrace xy-x^2/2\rbrace=x^{\star}y-(x^{\star})^2/2=y^2-y^2/2=y^2/2And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2 and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where\frac{1}{p}+\frac{1}{q}=1The second one is the conjugate of a quadratic function. More specifically, if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
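
The quadratic case can be checked numerically (a small sketch, with an arbitrary positive definite matrix, and optim used to compute the maximum; fQ and yQ are just names chosen here for the illustration).

Q = matrix(c(2,.5,.5,1), 2, 2)           # a positive definite matrix
fQ = function(x) sum(x * (Q %*% x)) / 2  # f(x) = x'Qx/2
fQstar = function(y) -optim(c(0,0), function(x) -(sum(x*y) - fQ(x)))$value
yQ = c(1,2)
fQstar(yQ)                               # numerical maximum of x'y - f(x)
t(yQ) %*% solve(Q) %*% yQ / 2            # theoretical value y'Q^{-1}y/2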

For the conjugate of the \ell_p norm, we can use the following code to visualize it

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost visualize that if f(x)=|x|, then\displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}
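
For the record, we can run exactly the same code with f(x)=|x| (a sketch reusing the viz helper above; since the maximization is over a bounded grid, the infinite part of the conjugate only shows up as very large values).

f = function(x) abs(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))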

To conclude, another popular case: if f(x)=\exp(x), then{\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}}We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))

Combining automatically factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables, using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
  x2=cut(x2,breaks=
  c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
  labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels were too many.

Following my post, Przemyslaw sent a comment suggesting to use

library(factorMerger)

It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, 
             factor = b$x2, 
             family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got
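
For completeness, those t-tests can be obtained from the regression output, changing the reference level of the factor (a quick sketch, with an arbitrary choice of reference, and not the original graph).

b2 = b                                      # work on a copy of the data
summary(lm(y ~ x2, data = b2))$coefficients # t-tests with the first level as reference
b2$x2 = relevel(b2$x2, ref = "E")           # change the reference level, e.g. to E
summary(lm(y ~ x2, data = b2))$coefficients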

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, and our dataset is automatically transformed: the continuous variables remain unchanged, and (some) categories of the categorical variables are merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")= 
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4 
     23      35      26     116

here in four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, one dataframe, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient…
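
If needed, that lookup table can be used to recode the factor in the original dataset (a minimal sketch, assuming the two columns are named x2 and Categories, as in the output above).

lkup = tree.bins(data=b, y=y, return = "lkup.list")[[1]]
b$x2.merged = lkup$Categories[match(as.character(b$x2), as.character(lkup$x2))]
table(b$x2.merged)                       # same four groups as before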


On leverage

Last week, in our STT5100 (applied linear models) class, I’ve introduced the hat matrix, and the notion of leverage. In a classical regression model, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in a matrix form), the ordinary least squares estimator of parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}The prediction can then be written\widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}where \color{blue}{\boldsymbol{H}} is called the hat matrix.

The matrix is idempotent, i.e. \boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}so it can be interpreted as a projection matrix. Furthermore, since\boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is on a subspace that contains all the linear combinations of columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is also a projection matrix. And we can write\boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} on the (linear) space of linear combinations of columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of residuals, being unpredictable (at least with a linear model using variables \boldsymbol{X}).
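
Those properties are easy to check numerically (a small sketch, on a simulated design matrix with an intercept).

set.seed(1)
n = 20
X = cbind(1, runif(n), runif(n))         # design matrix, with an intercept
y = X %*% c(1,2,-1) + rnorm(n, 0, .2)
H = X %*% solve(t(X) %*% X) %*% t(X)     # hat matrix
max(abs(H %*% H - H))                    # idempotent, numerically zero
max(abs(H %*% X - X))                    # HX = X
y_hat = H %*% y
eps_hat = (diag(n) - H) %*% y
t(y_hat) %*% eps_hat                     # fitted values orthogonal to residuals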

Let’s move a bit faster now (we’ve seen many other properties last week), and consider the elements on the diagonal of matrix \boldsymbol{H}. Recall that \widehat{\boldsymbol{y}}=\boldsymbol{H}\boldsymbol{y}, i.e. \widehat{y}_i=\boldsymbol{H}_{i,i}y_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}y_j, so entry \boldsymbol{H}_{i,i} is a measure of the influence of entry \boldsymbol{y}_i on its own prediction \widehat{\boldsymbol{y}}_i.

We have seen that\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top)which can be written\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}((\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{X})=\text{trace}(\mathbb{I}_p)=pwhere classically p=k+1, where k is the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2) that\boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2On the one hand, since the second term is positive, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. 1\geq\boldsymbol{H}_{i,i}. And since both terms are positive, \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in the course on the sharpness of the bounds.
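
Continuing with the simulated design above (still a sketch), we can check that the trace is indeed p, and that the diagonal entries stay in [0,1].

sum(diag(H))                             # trace of H: p = 3 (two explanatory variables plus the intercept)
range(diag(H))                           # all leverages are within [0,1]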

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is here extremely influential: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all other points are equally influential, and because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.
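
As a sanity check (a short sketch, with the leverages computed above), the last one is exactly 1, and they sum to p=2.

H[11]                                    # leverage of the last observation, equal to 1
sum(H)                                   # trace constraint: the leverages sum to p = 2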

Now, what about the lower bound? In order to have some sort of “non-influential” observations, consider the two following cases.

  • the case where one observation (below, the first one) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
  • the case where one observation (below, the tenth one) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}} (from the first order condition – or normal equation – the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}}))

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
y = c(predict(model,newdata=data.frame(x=4)),
2:9,8,
predict(model,newdata=data.frame(x=1.8))))

We now have

If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, the leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero?

Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above)\boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}which is minimum when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n, otherwise \boldsymbol{H}_{i,i}>1/n. And this property is also valid in a multiple regression (as soon as an intercept is included in the regression – which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}, then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i}(which is basically a matrix version of the previous equation).
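
That closed-form expression is easy to check on the previous example (a short sketch, xd being just a local copy of the x values).

xd = df$x
nd = length(xd)
1/nd + (xd - mean(xd))^2 / sum((xd - mean(xd))^2)   # closed-form leverages
lm.influence(model)$hat                             # leverages from the fitted model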

I can maybe add another comment on Anscombe’s data. We’ve seen that, on the right, we did reach 1. But I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left. Those all have the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}hence, using the relationship obtained from the fact that the hat matrix is idempotent,\boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2thus, we now have\boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2] when there are two “duplicates”; more generally, the upper bound becomes 1/(n-1) when there are n-1 “duplicates”. So here, the n-1 \boldsymbol{H}_{i,i}‘s on the left are at most 1/(n-1), the last one is at most 1, and the sum has to be p=2: the only way to reach that sum is to have \boldsymbol{H}_{i,i}=1/(n-1) for the n-1 duplicated points, and \boldsymbol{H}_{11,11}=1 for the last one. So we have the values of the n \boldsymbol{H}_{i,i}‘s.