Tag Archives: R-english

Some general thoughts on Partial Dependence Plots with correlated covariates

The partial dependence plot is a nice tool to analyse the impact of some explanatory variables when using nonlinear models, such as a random forest, or some gradient boosting. The idea (in dimension 2) is the following: given a model m(x_1,x_2) for \mathbb{E}[Y|X_1=x_1,X_2=x_2], the partial dependence plot for variable x_1 in model m is the function p_1 defined as x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)]. It can be approximated, using some dataset, by \widehat{p}_1(x_1)=\frac{1}{n}\sum_{i=1}^n m(x_1,x_{2,i}). My concern here is the interpretation of that plot when there are some (strongly) correlated covariates. Let us generate some dataset to start with

n=1000
library(mnormt)
r=.7
set.seed(1234)
X = rmnorm(n,mean = c(0,0),varcov = matrix(c(1,r,r,1),2,2))
Y = 1+X[,1]-2*X[,2]+rnorm(n)/2
df = data.frame(Y=Y,X1=X[,1],X2=X[,2])

As we can see, the true model here is y_i=\beta_0+\beta_1 x_{1,i}+\beta_2x_{2,i}+\varepsilon_i where \beta_1 =1, but the two covariates are positively correlated, and the second one has a strong negative impact. Note that here

reg = lm(Y~.,data=df)
summary(reg)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.01414    0.01601   63.35   <2e-16 ***
X1           1.02268    0.02305   44.37   <2e-16 ***
X2          -2.03248    0.02342  -86.80   <2e-16 ***

If we estimate a wrongly specified model y_i=b_0+b_1 x_{1,i}+\eta_i, we would get

reg1 = lm(Y~X1,data=df)
summary(reg1)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.03522    0.04680  22.121   <2e-16 ***
X1          -0.44148    0.04591  -9.616   <2e-16 ***

Thus, on the properly specified model, \widehat{\beta}_1\sim+1.02 while \widehat{b}_1\sim-0.44 on the misspecified one. This is consistent with the omitted-variable bias: since \text{Var}(X_1)=1, the slope of the misspecified regression converges to \beta_1+\beta_2\text{Cov}(X_1,X_2)=1-2\times 0.7=-0.4.

Now, let us look at the partial dependence plot of the proper model, using standard dedicated R packages,

library(pdp) 
pdp::partial(reg, pred.var = "X1", plot = TRUE,
              plot.engine = "ggplot2")

which is the straight line y=1+x, that corresponds to y=\beta_0+\beta_1x.

library(DALEX)
plot(DALEX::single_variable(DALEX::explain(reg,
data=df),variable = "X1",type = "pdp"))

which corresponds to the previous graph. Here, it is also possible to create our own function to compute that partial dependence plot,

pdp1 = function(x1){
  nd = data.frame(X1=x1,X2=df$X2)
  mean(predict(reg,newdata=nd))
}

which gives the straight line below (the dotted line is the theoretical one, y=1+x),

vx=seq(-3.5,3.5,length=101)
vpdp1 = Vectorize(pdp1)(vx)
plot(vx,vpdp1,type="l")
abline(a=1,b=1,lty=2)

which is very different from the univariate regression on x_1

abline(reg1,col="red")

Actually, the latter is very consistent with a local regression, only on x_1

library(locfit)
lines(locfit(Y~X1,data=df),col="blue")

Now, to get back to the definition of the partial dependence plot, x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)], in the context of correlated variables, I was wondering if it would not make more sense to consider some local version, something like x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2|X_1}}[m(x_1,X_2)]. My intuition was that, somehow, it did not make much sense to consider any X_2 while X_1 was fixed (and equal to x_1); it would make more sense to look only at values of X_2 that are plausible given that value of X_1. And a natural estimate could be based on the k nearest neighbours, i.e. \tilde{p}_1(x_1)=\frac{1}{k}\sum_{i\in\mathcal{V}_k(x_1)} m(x_1,x_{2,i}) where \mathcal{V}_k(x_1) is the set of indices of the k observations whose x_{1,i}'s are the closest to x_1, i.e.

lpdp1 = function(x1){
  nd = data.frame(X1=x1,X2=df$X2)
  # rank observations by how close their X1 is to the target value x1
  idx = rank(abs(df$X1-x1))
  # average the predictions over the (roughly 50) nearest neighbours only
  mean(predict(reg,newdata=nd[idx<50,]))
}
vlpdp1 = Vectorize(lpdp1)(vx)
lines(vx,vlpdp1,col="darkgreen",lwd=2)

Surprisingly (?), this local partial dependence plot gives a curve that corresponds to the simple regression…

Lilliefors, Kolmogorov-Smirnov and cross-validation

In statistics, the Kolmogorov–Smirnov test is a popular procedure to test, from a sample \{x_1,\cdots,x_n\}, whether it is drawn from a distribution F, or usually F_{\theta_0}, where F_{\theta} is some parametric distribution. For instance, we can test H_0:X_i\sim\mathcal{N}(0,1) (where \theta_0=(\mu_0,\sigma_0^2)=(0,1)) using that test. More specifically, I wanted to discuss p-values today. Given n, let us draw \mathcal{N}(0,1) samples of size n, and compute the p-values of the Kolmogorov–Smirnov tests

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",0,1)$p.value
}

We can visualise the distribution of the p-values below (I added some Beta distribution fit here)

library(fitdistrplus)
fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

It looks like it is quite uniform (and theoretically, the p-value is indeed uniformly distributed under H_0). More specifically, the p-value was lower than 5% in 5% of the samples


mean(p<=.05)
[1] 0.0479

i.e. we wrongly reject H_0:X_i\sim\mathcal{N}(0,1) in 5% of the samples.
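As a quick sanity check of that uniformity claim, here is a small sketch (simply reusing the vector p of simulated p-values above), comparing its empirical distribution with the uniform distribution,

# Kolmogorov-Smirnov test of the simulated p-values against the uniform distribution
ks.test(p,"punif")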

As discussed previously on the blog, in many cases, we do care about the distribution, and not really the parameters, so we wish to test something like H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2. Therefore, a natural idea can be to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2), for some estimates of \mu and \sigma^2. That is the idea of Lilliefors test. More specifically, Lilliefors test suggests to use the Kolmogorov–Smirnov statistic, but to correct the p-value. Indeed, if we draw many samples, and use the Kolmogorov–Smirnov statistic and its classical p-value to test for H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2),

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
}

we see clearly that the distribution of p-values is no longer uniform

fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

More specifically, if the x_i's are actually drawn from some Gaussian distribution, there is almost no chance to reject H_0, the p-value being almost never below 5%

mean(p<=.05)
[1] 0.00012

Usually, to interpret that result, the heuristics is that \hat\mu and \hat\sigma^2 are both based on the sample, while previously 0 and 1 were based on some prior knowledge. Somehow, it reminded me of the classical problem we mention when we introduce cross-validation, which is Goodhart's law

When a measure becomes a target, it ceases to be a good measure

i.e. we cannot assess goodness of fit using the same data as the ones used to estimate the parameters. So here, why not use some hold-out (or cross-validation) procedure: split the dataset in two parts, \{x_1,\cdots,x_k\} (with k<n) to estimate the parameters \mu and \sigma^2, and then use \{x_{k+1},\cdots,x_n\} and the Kolmogorov–Smirnov statistic on it to test if the x_i's are drawn from some Gaussian distribution. More precisely, will the p-value computed using the standard Kolmogorov–Smirnov procedure be valid here? I tried two scenarios, k/n being either 1/3 or 2/3,

p = matrix(NA,1e5,4)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s,1] = ks.test(X,"pnorm",0,1)$p.value
p[s,2] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
p[s,3] = ks.test(X[1:200],"pnorm",mean(X[201:300]),sd(X[201:300]))$p.value
p[s,4] = ks.test(X[201:300],"pnorm",mean(X[1:200]),sd(X[1:200]))$p.value
}

Again, we can visualize the distributions of p-values,  in the case where 1/3 of the data is used to estimate \mu and \sigma^2, and 2/3 of the data is used to test

fit.dist = fitdist(p[,3],"beta")
hist(p[,3],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


and in the case where 2/3 of the data is used to estimate \mu and \sigma^2, and 1/3 of the data is used to test

fit.dist = fitdist(p[,4],"beta")
hist(p[,4],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


Observe here that we (wrongly) reject too frequently H_0, since the p-values are  below 5% in 25% of the scenarios, in the first case (less data used to estimate), and 9% of the scenarios, in the second case (less data used to test)

mean(p[,3]<=.05)
[1] 0.24168
mean(p[,4]<=.05)
[1] 0.09334

We can actually compute that probability as a function of k/n

n=300
p = matrix(NA,1e4,99)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  KS = function(p) ks.test(X[1:(p*n)],"pnorm",mean(X[(p*n+1):n]),sd(X[(p*n+1):n]))$p.value
  p[s,] = Vectorize(KS)((1:99)/100)
}

The evolution of the probability is the following

prob5pc = apply(p,2,function(x) mean(x<=.05))
plot((1:99)/100,prob5pc)

so, it looks like we can use some sort of hold-out procedure to test for H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2, using the Kolmogorov–Smirnov test with \mu=\hat\mu and \sigma^2=\hat\sigma^2, but the proportion of data used to estimate those quantities should be (much) larger than the one used to compute the statistic. Otherwise, we clearly reject H_0 too frequently.
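As a side note, the Lilliefors correction itself is readily available in R, for instance in the nortest package (a minimal sketch, assuming that package is installed); it adjusts the p-value for the fact that \mu and \sigma^2 are estimated on the same sample,

# Lilliefors (Kolmogorov-Smirnov) normality test, with the corrected p-value
library(nortest)
X = rnorm(n,0,1)
lillie.test(X)$p.value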

Insurance Pricing Game

Would you like to put your data science skills to the test?

Imperial College London, the Université du Québec à Montréal (UQAM), and actuarial institutes in Singapore, the UK (including the IFoA) and Australia, as well as ASTIN and the Casualty Actuarial Society, are co-organising a global data science competition.

Would you like to accurately predict the cost of insurance by putting your data science skills to the test? We are hosting two competitions with separate datasets, a loss prediction competition on Kaggle with synthetic workers’ compensation data, and a pricing competition in a simulated market hosted on AI Crowd with real-world motor insurance contracts. Code can be written in either R or Python. The competition is being sponsored by a number of different organisations, with a total of US$12,000 in cash prizes to be won. For more information about how to take part, please visit www.pricing-game.com

Trees and forests

For my ACT6100 weekly quiz, I usually generate some datasets, and then ask students to compare various predictive algorithms. Last week, it was about classification trees and random forests. And students were surprised to observe such differences (they had to estimate the probability of a specific label, at the barycenter of the covariates).

Usually, I use the following to generate some (here 12) covariates that could be correlated

library(FactoMineR)
n=279
library(clusterGeneration)
library(mnormt)
k=12
S=genPositiveDefMat("unifcorrmat",dim=k)
X=round(rmnorm(n,varcov=S$Sigma)+8,2)
rownames(X)=1:n
colnames(X)=LETTERS[1:k]

Then I need to generate some data, based on some covariates (5 out of 12), with various strengths

idx = sample(1:k,size=5)
u = sample(c(-(4:1),1:4),5)
beta = rep(0,k)
beta[idx] = u
U = X%*%beta
U = U-min(U)
U = U/max(U)*6-3
p = exp(U)/(1+exp(U))
Y = rbinom(n,size=1,prob=p)
df = data.frame(Y=as.factor(Y),X)
levels(df$Y) = c("blue","red")

We can run a classification tree

library(rpart)
arbre = rpart(Y~., data=df)

and a random forest,

library(randomForest)
set.seed(1)
arbres = randomForest(Y~., data=df)

Here are the partial plots for 4 of the explanatory variables that actually have an impact

partialPlot(arbres,pred.data = df, x.var = "A")


Predictions for the “average” point of the dataset are here

(parbre = predict(arbre,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
       blue       red
1 0.8064516 0.1935484
(parbres = predict(arbres,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
   blue   red
1 0.422 0.578
attr(,"class")
[1] "matrix" "votes"

and there is a substantial difference, with a probability of 19% with a single tree, 58% with 500 trees (the default value of the function).

To understand why we can have such a difference, we should not only focus on the bagging strategy, but also look at the variability of the predictions obtained with trees,

B=1e4
parbres = rep(NA,B)
m=data.frame(t(apply(df[,-1],2,mean)))
for(b in 1:B){
  idx = sample(1:nrow(df),size=nrow(df),replace=TRUE)
  arbre = rpart(Y~., data=df[idx,])
  parbres[b] = predict(arbre,newdata=m,type = "prob")[2]
}
hist(parbres)

Surprisingly, we have here a bimodal distribution for \hat{y}, which is either very small for some trees, or very large for others. On average, we have a value close to 55%… I think I will use that generative algorithm more for future quizzes…

Sharing pictures from holidays in the Canadian Rockies (with R)

My kids have a very popular blog (at least among their grandmothers) where they frequently post pictures from everyday life (since they live 5000km away from them), as well as pictures taken during holidays. This afternoon, I tried to use the popupImage function from the mapview package, together with leaflet, to post pictures on a map (to explain where we spent our holiday this summer). This post is just to keep track of that code.

First, we need to load the appropriate R packages

library(leaflet)
library(mapview)

Then, we take a picture, and we locate it, for instance Mirror Lake (on the trail to Lake Agnes). Since leaflet uses OpenStreetMap, I recommend using it also to get the coordinates (and not Google Maps… coordinates can be slightly different)

df=data.frame(lat =51.41603, long=-116.23946,
nom = "Mirror Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5967.jpg")

I guess you could also use the metadata, if you take pictures with a cell phone that records the location… but I am (very) old fashioned, and still use a camera to take pictures. Then you can add a dozen pictures

df=rbind(df, data.frame(lat =51.4164, long=-116.2442,
nom = "Lake Agnes",photo="http://freakonometrics.free.fr/jaspeR/_DSC6003.jpg"))
df=rbind(df, data.frame(lat =51.3215642,long=-116.193718,
nom="Moraine Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5957.jpg"))

From that dataframe, we need two kinds of information: the location, and the url of the picture,

data_df=df[,c("lat","long")]
images = as.character(df$photo)

Then we can create the leaflet map

m = leaflet(data_df) %>%
  addTiles() %>%
  addCircleMarkers(
    fillOpacity = 0.8, radius = 5,
    lng = ~long, lat =~lat, 
    popup = popupImage(images)
  )

and export it (in a nice html file)

library(htmlwidgets)
saveWidget(m, file="jaspR.html")

Regression discontinuity model for TV series

In September, we are usually happy to see our favorite TV series back on air… Or not? Because, admit it, even if we are happy to see those characters back, most of the time, we are disappointed, too. So why not look at the data, to confirm this feeling? Nazareno Andrade shared some nice code to get IMDB ratings in a nice csv file (you can either use the large csv file, or run your own code)

download.file("https://github.com/nazareno/imdb-series/raw/master/data/series_from_imdb.csv",
destfile="series_from_imdb.csv")
base = read.csv("series_from_imdb.csv")

It is a large dataset, with more than 64,000 episodes of almost 890 TV series,

str(base)
'data.frame':	64018 obs. of  18 variables:
 $ series_name: Factor w/ 889 levels "'Allo 'Allo!",..: 137 137 137 137 137 137 137 137 137 137 ...
 $ episode    : Factor w/ 54090 levels "-30-","¡Viva los muertos!",..: 32314 7446 16 7176 17748 9562 1379 36218 17845 5553 ...
 $ series_ep  : int  1 2 3 4 5 6 7 8 9 10 ...
 $ season     : int  1 1 1 1 1 1 1 2 2 2 ...
 $ season_ep  : int  1 2 3 4 5 6 7 1 2 3 ...
 $ user_rating: num  8.9 8.7 8.7 8.2 8.3 9.2 8.8 8.7 9.2 8.3 ...

Just pick a TV series, for instance Dan Harmon’s Community,

sbase = base[base$series_name=="Community",]

Since there could be some problems with the data (such as duplicates), let us clean it quickly,

sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase$series_ep=1:nrow(sbase)

We can then plot the evolution of the rating over the 110 episodes, with one regression line per season,

colr = c("black","grey","blue")  # colours used in the plots below (any palette works)
plot(sbase$series_ep,sbase$UserRating,xlab=sbase$series_name[1])
idx=c(0,which(diff(sbase$season)!=0),nrow(sbase))
abline(v=idx+.5,lty=2,col=colr[2])
a = unique(sbase$season)
for(u in a){
  ssbase = sbase[sbase$season==u,]
  reg = lm(UserRating~series_ep,data=ssbase)
  lines(ssbase$series_ep,predict(reg),col=colr[3],lwd=2)
}

The vertical lines are here to visualize the seasons. One issue is that the length of the seasons can vary with time. Consider Linwood Boomer’s Malcolm in the Middle,

sbase = base[base$series_name=="Malcolm in the Middle",]

or Craig Thomas and Carter Bays’s How I Met Your Mother,

sbase = base[base$series_name=="How I Met Your Mother",]

On those two, the evolution is rather stable. Look at AMC’s The Walking Dead,

sbase = base[base$series_name=="The Walking Dead",]

Now, look at Howard Gordon and Alex Gansa’s Homeland,

sbase = base[base$series_name=="Homeland",]

There is an issue here with the last episode of season 4, “Long Time Coming”, that has a very poor rating. If we remove that point, we get the thin line. Note that the regression line is always increasing. For Michael Hirst’s Vikings, we have

sbase = base[base$series_name=="Vicking",]

If we look more carefully at the previous graph, for five seasons (out of six), we have a positive slope. Well, to be honest, it is not significantly positive most of the time, but still. Out of 80 shows, and a total of 583 seasons, the slope is positive 75% of the time (433 seasons) and negative 25% of the time (150).

BASE = NULL
L80 = unique(base$series_name)
for(j in 1:length(L80)){
sbase=base[base$series_name==L80[j],]
sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase=sbase[sbase$season>0,]
sbase$series_ep=1:nrow(sbase)
a=unique(sbase$season)
a=a[!is.na(a)]
for(u in a){
  ssbase=sbase[sbase$season==u,]
  reg=lm(UserRating~series_ep,data=ssbase)
  pente = NA
  if((!is.na(coefficients(reg)[2]))&(!is.na((summary(reg)$coefficients[2,4])))){
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]>0)) pente="positive"
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]<0)) pente="negative"
  sdf=data.frame(nom=sbase$series_name[1],season=u,slope=coefficients(reg)[2],
                 inf=confint(reg)[2,1],sup=confint(reg)[2,2],signe=pente)
  BASE=rbind(BASE,sdf)}
}}
 
str(BASE)
'data.frame': 583 obs. of 6 variables:
 $ nom : Factor w/ 80 levels "Friends","Game of Thrones",..: 1 1 1 1 1 1 1 1 1 1 ...
 
mean(BASE$slope>0)
[1] 0.7427101
table(BASE$signe)
negative positive 
      15      144

Most of the time, the slope is not significant. To be more specific, 72% of the time, the slope is not significantly different from zero. But when it is, 90% of the time, it is positive (144 seasons, against 15 negative ones). Let us look at other TV series, for instance Joel Surnow and Robert Cochran’s 24,

sbase = base[base$series_name=="24",]

Álex Pina’s La Casa de Papel,

sbase = base[base$series_name=="La Casa de Papel",]

Steven Knight’s Peaky Blinders,

sbase = base[base$series_name=="Peaky Blinders",]

or David Simon’s The Wire,

sbase = base[base$series_name=="The Wire",]

The slope is increasing over almost all seasons. But a major drawback is that when we get back to our show, for a new season, we are usually disappointed. More specifically, we can quantify the difference (in red below)

that can be estimated using

sbase12 = sbase[sbase$season%in%c(a[ij],a[ij+1]),]
seuil = sbase12$series_ep[which(diff(sbase12$season)!=0)]+.5
s = function(x) (x-seuil)*(x>seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)

Here we have

summary(reg)
Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                8.45000    0.16338  51.719  < 2e-16 ***
series_ep                  0.10000    0.03235   3.091 0.008598 ** 
s(series_ep)               0.02000    0.04218   0.474 0.643291    
I(series_ep > seuil)TRUE  -1.01778    0.20486  -4.968 0.000257 ***

so the drop of 1 point (out of 10) at the beginning of the new season is significant here. That is the idea of regression discontinuity.

If we loop again over all our series, we have 485 pairs of consecutive seasons (a sketch of such a loop is given below). As expected, in 75% of the cases, from season t-1 to season t, we observe a negative rupture. As previously, in 70% of the cases, it is not significant (with linear models before and after), and when it is significant, it is negative in 96% of the cases! But an alternative can be to use nonparametric models, on both sides.
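Such a loop is not reproduced in full here, but a sketch of it could look as follows (reusing the list L80 of shows and the cleaning steps above; the data frame RUPT and its column names are only for illustration),

# for each show, and each pair of consecutive seasons, fit the piecewise linear
# model with a possible jump at the season change, and store the estimated rupture
RUPT = NULL
for(j in 1:length(L80)){
  sbase = base[base$series_name==L80[j],]
  sbase = sbase[!duplicated(sbase[,c(1,2,4,5)]),]
  sbase = sbase[sbase$season>0,]
  sbase$series_ep = 1:nrow(sbase)
  a = unique(sbase$season); a = a[!is.na(a)]
  if(length(a)>1) for(ij in 1:(length(a)-1)){
    sbase12 = sbase[sbase$season %in% c(a[ij],a[ij+1]),]
    seuil = sbase12$series_ep[which(diff(sbase12$season)!=0)]+.5
    if(length(seuil)==1){
      s = function(x) (x-seuil)*(x>seuil)
      reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)
      if(!any(is.na(coefficients(reg)))){
        RUPT = rbind(RUPT, data.frame(nom=L80[j], season=a[ij+1],
                     rupture=coefficients(reg)[4],
                     pvalue=summary(reg)$coefficients[4,4]))}
    }
  }
}
mean(RUPT$rupture<0)   # proportion of negative ruptures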

To illustrate, consider David Benioff and D. B. Weiss’s Game of Thrones,

sbase = base[base$series_name=="Game of Thrones",]

but let us remove the last season (no spoiler here, but it is clearly not worth watching)
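One way to do so (a small sketch, assuming the last season is simply the largest value of season in that subset) is

sbase = sbase[sbase$season != max(sbase$season,na.rm=TRUE),]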

Consider for instance the drop between season 1 and season 2,

library(rdd)
sbase12=sbase[sbase$season%in%c(1,2),]
lmr=RDestimate(UserRating~series_ep,data=sbase12,cutpoint=mean(range(sbase12$series_ep)))
plot(lmr)

This is very consistent with what we observed with our linear regressions actually,

seuil=10.5
s = function(x) (x-seuil)*(x&gt;seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep&gt;seuil),data=sbase12)
summary(reg)
 
Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                8.70000    0.15458  56.281  < 2e-16 ***
series_ep                  0.07273    0.02491   2.919  0.01003 *  
s(series_ep)               0.01455    0.03523   0.413  0.68520    
I(series_ep > seuil)TRUE  -0.94000    0.20316  -4.627  0.00028 ***

Here, the drop of one point is significant…

So, your favorite show had an outstanding finale? And you can’t wait to watch the new season… Well, statistically, it is very likely that you will be disappointed by the first episode of the forthcoming season…

Testing for Covid-19 in the U.S.

For almost a month, on a daily basis, we have been working with colleagues (Romuald, Chi and Mathieu) on modeling the dynamics of the recent pandemic. I learn a lot of things discussing with them, but we keep struggling with the tests. Paul, in Montréal, helped me a little bit, but I think we still have more work to do to get a better understanding. To be honest, we struggle with two very simple questions

  • how many people are tested on a daily basis ?

Recently, I discovered Modelling COVID-19 exit strategies for policy makers in the United Kingdom, which is very close to what we try to do… and in that document two interesting scenarios are discussed, with, for the first one, “1 million ‘reliable’ daily tests are deployed” (in the U.K.) and, for the second one, “5 million ‘useless’ daily tests are deployed”. There are about 65 million inhabitants in the U.K., so we talk here about 1.5% of the population tested on a daily basis, or 7.69%! It could make sense, but our question was, at some point, is that realistic? Where are we today with testing? In the U.S., https://covidtracking.com/ collects interesting data, on a daily basis, per state.

url = "https://raw.githubusercontent.com/COVID19Tracking/covid-tracking-data/master/data/states_daily_4pm_et.csv"
download.file(url,destfile="covid.csv")
base = read.csv("covid.csv")

Unfortunately, there is no information about the population. That can be found on Wikipedia. But in that table, the state is given by its full name (while it is given by its symbol in the previous dataset). So we also need to match the two datasets properly,

url="https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population"
download.file(url,destfile = "popUS.html")
library(XML)
tables=readHTMLTable("popUS.html")
T=tables[[1]][3:54,c("V3","V4")]
names(T)=c("state","pop")
url="https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations"
download.file(url,destfile = "nameUS.html")
tables=readHTMLTable("nameUS.html")
T2=tables[[1]][13:63,c(1,4)]
names(T2)=c("state","symbol")
T=merge(T,T2)
T$population = as.numeric(gsub(",", "", T$pop, fixed = TRUE))
names(base)[2]="symbol"
base = merge(base,T[,c("symbol","population")])

Now our dataset is fine… and we can get a function to plot the number of people tested in the U.S. (cumulated). Here, we distinguish between the positive and the negative,

drawing = function(st ="NY"){
sbase=base[base$symbol==st,c("date","positive","negative","population")]
sbase$DATE = as.Date(as.character(sbase$date),"%Y%m%d")
sbase=sbase[order(sbase$DATE),]
par(mfrow=c(1,2))
plot(sbase$DATE,(sbase$positive+sbase$negative)/sbase$population,ylab="Proportion Test (/population of state)",type="l",xlab="",col="blue",lwd=3)
lines(sbase$DATE,sbase$positive/sbase$population,col="red",lwd=2)
legend("topleft",c("negative","positive"),lwd=2,col=c("blue","red"),bty="n")
title(st)
plot(sbase$DATE,sbase$positive/(sbase$positive+sbase$negative),ylab="Ratio of positive tests",ylim=c(0,1),type="l",xlab="",col="black",lwd=3)
title(st)}

Let us start with New York

drawing("NY")

As of now, 4% of the entire population has been tested… over 6 weeks…. The graph on the right is the proportion of people who tested positive… I won’t get back to that one here today, I keep it for our work. In New Jersey, about 2.5% of the entire population has been tested, overall,

drawing("NJ")

Let us try a last one, Florida

drawing("FL")

As of today, it is 1.5% of the population, over 6 weeks. Overall, in the U.S., less than 0.1% of the population is tested on a daily basis, which is far from the 1.5% of the U.K. scenarios. Now, here comes the second question,

  • what are we actually testing for ?

On that one, my experience in biology is… very limited, and Paul helped me. He mentioned this morning a nice report, from a lab in UC Berkeley

One of my questions was, for instance: if you test positive, and you take the test again, can you test negative? Or, in the context of our data, do we test different people? Are some people tested on a regular basis (perhaps every week)? For instance, with molecular tests (Reverse Transcription Quantitative Polymerase Chain Reaction, RT-qPCR, also called PCR tests) we test whether someone is currently infectious, while with antibody tests (using serological immunoassays that detect viral-specific antibodies, Immunoglobulin M (IgM) and G (IgG), also called serology tests), we test for immunity. Which is rather different…

I have no idea what we have in our database, to be honest… and for the past six weeks, I have seen a lot of databases, and most of the time, I don’t know how to interpret them, I don’t know what is measured… and it is scary. So, so far, we try to do some maths, to test dynamics by tuning parameters “the best we can” (and not estimate them). But if anyone has good references on testing, in the context of Covid-19 (for instance on the specificity and sensitivity of all those tests), I would love to hear about it!

On the “correlation” between a continuous and a categorical variable

Let us get back to the Titanic dataset,

loc_fichier = "http://freakonometrics.free.fr/titanic.RData"
download.file(loc_fichier, "titanic.RData")
load("titanic.RData")
base = base[!is.na(base$Age),]

We consider two variables, the age x (the continuous one) and the survival indicator y (the categorical one)

X = base$Age
Y = base$Survived

It looks like the age might be a valid explanatory variable in the logistic regression,

summary(glm(Survived~Age,data=base,family=binomial))
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -0.05672    0.17358  -0.327   0.7438  
Age         -0.01096    0.00533  -2.057   0.0397 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 960.23  on 712  degrees of freedom
AIC: 964.23

The significance test here has a p-value just below 4%. Actually, one can relate it to the value of the deviance (the null deviance and the residual deviance). Recall that D=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\widehat{\boldsymbol{\mu}})\big) while D_0=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\overline{y})\big). Under the assumption that x is worthless, D_0-D tends to a \chi^2 distribution with 1 degree of freedom. And we can compute the p-value of that likelihood ratio test,

1-pchisq(964.52-960.23,1)
[1] 0.03833717

(which is consistent with a Gaussian test). But if we consider a nonlinear transformation

library(splines)
summary(glm(Survived~bs(Age),data=base,family=binomial))
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   0.8648     0.3460   2.500 0.012433 *  
bs(Age)1     -3.6772     1.0458  -3.516 0.000438 ***
bs(Age)2      1.7430     1.1068   1.575 0.115299    
bs(Age)3     -3.9251     1.4544  -2.699 0.006961 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 948.69  on 710  degrees of freedom

which seems to be “more significant”

1-pchisq(964.52-948.69,3)
[1] 0.001228712

So it looks like the variable x is interesting here.

To visualize the non-null correlation, one can consider the conditional distribution of x given y=1, and compare it with the conditional distribution of x given y=0,

ks.test(X[Y==0],X[Y==1])
 
	Two-sample Kolmogorov-Smirnov test
 
data:  X[Y == 0] and X[Y == 1]
D = 0.088777, p-value = 0.1324
alternative hypothesis: two-sided

i.e. with a p-value above 10%, the two distributions are not significantly different.

F0 = function(x) mean(X[Y==0]<=x)
F1 = function(x) mean(X[Y==1]<=x)
vx = seq(0,80,by=.1)
vy0 = Vectorize(F0)(vx)
vy1 = Vectorize(F1)(vx)
plot(vx,vy0,col="red",type="s")
lines(vx,vy1,col="blue",type="s")

(we can also look at the densities, but it looks like there is not much to see)

An alternative is to discretize the variable x and to use Pearson’s independence test,

k=5
LV = quantile(X,(0:k)/k)
LV[1] = 0
Xc = cut(X,LV)
table(Xc,Y)
           Y
Xc           0  1
  (0,19]    85 79
  (19,25]   92 45
  (25,31.8] 77 50
  (31.8,41] 81 63
  (41,80]   89 53
chisq.test(table(Xc,Y))
 
	Pearson's Chi-squared test
 
data:  table(Xc, Y)
X-squared = 8.6155, df = 4, p-value = 0.07146

The p-value is here 7%, with five categories for the age. And actually, we can compute that p-value for various numbers of categories,

pvalue = function(k=5){
LV = quantile(X,(0:k)/k)
LV[1] = 0
Xc = cut(X,LV)
chisq.test(table(Xc,Y))$p.value}
vk = 2:20
vp = Vectorize(pvalue)(vk)
plot(vk,vp,type="l")
abline(h=.05,col="red",lty=2)

which gives a p-value close to 5%, as soon as we have enough categories. In the slides of the course (STT5100), I claim that the age is actually an important variable when trying to predict whether a passenger survived. The tests mentioned here are not as conclusive, nevertheless…

Modeling Pandemics (3)

In Statistical Inference in a Stochastic Epidemic SEIR Model with Control Intervention, a more complex model than the one we’ve seen yesterday was considered (and is called the SEIR model). Consider a population of size N, and assume that S is the number of susceptible, E the number of exposed, I the number of infectious, and R for the number recovered (or immune) individuals, \displaystyle{\begin{aligned}{\frac {dS}{dt}}&=-\beta {\frac {I}{N}}S\\[8pt]{\frac {dE}{dt}}&=\beta {\frac {I}{N}}S-aE\\[8pt]{\frac {dI}{dt}}&=aE-b I\\[8pt]{\frac {dR}{dt}}&=b I\end{aligned}}Between S and I, the transition rate is \beta I, where \beta is the average number of contacts per person per time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject. Between I and R, the transition rate is b (simply the rate of recovered or dead, that is, number of recovered or dead during a period of time divided by the total number of infected on that same period of time). And finally, the incubation period is a random variable with exponential distribution with parameter a, so that the average incubation period is a^{-1}.
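Before moving to a richer model, here is a minimal sketch of those SEIR dynamics with the deSolve package (used again below); the parameter values are arbitrary, only for illustration,

library(deSolve)
SEIR = function(t,Z,p){
  S=Z[1]; E=Z[2]; I=Z[3]; R=Z[4]; N=S+E+I+R
  beta=p["beta"]; a=p["a"]; b=p["b"]
  dS = -beta*I/N*S        # new infections leave the susceptible class
  dE = beta*I/N*S - a*E   # exposed, not yet infectious
  dI = a*E - b*I          # end of incubation, then recovery
  dR = b*I
  list(c(dS,dE,dI,dR))}
p = c(beta=2, a=1/7, b=1/2)
start_SEIR = c(S=1-1e-3, E=1e-3, I=0, R=0)
times = seq(0, 100, by=.1)
resol_SEIR = ode(y=start_SEIR, times=times, func=SEIR, parms=p)
plot(resol_SEIR[,"time"],resol_SEIR[,"I"],type="l",xlab="time",ylab="")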

Probably more interesting, Understanding the dynamics of ebola epidemics suggested a more complex model, with susceptible people S, exposed E, Infectious, but either in community I, or in hospitals H, some people who died F and finally those who either recover or are buried and therefore are no longer susceptible R.

Thus, the following dynamic model is considered \displaystyle{\begin{aligned}{\frac {dS}{dt}}&=-(\beta_II+\beta_HH+\beta_FF)\frac{S}{N}\\[8pt]\frac {dE}{dt}&=(\beta_II+\beta_HH+\beta_FF)\frac{S}{N}-\alpha E\\[8pt]\frac {dI}{dt}&=\alpha E-\theta\gamma_H I-(1-\theta)(1-\delta)\gamma_RI-(1-\theta)\delta\gamma_FI\\[8pt]\frac {dH}{dt}&=\theta\gamma_HI-\delta\lambda_FH-(1-\delta)\lambda_RH\\[8pt]\frac {dF}{dt}&=(1-\theta)(1-\delta)\gamma_RI+\delta\lambda_FH-\nu F\\[8pt]\frac {dR}{dt}&=(1-\theta)(1-\delta)\gamma_RI+(1-\delta)\lambda_RH+\nu F\end{aligned}} In that model, the parameters are \alpha^{-1}, the (average) incubation period (7 days), \gamma_H^{-1} the onset to hospitalization (5 days), \gamma_F^{-1} the onset to death (9 days), \gamma_R^{-1} the onset to “recovery” (10 days), \lambda_F^{-1} the hospitalisation to death (4 days), \lambda_R^{-1} the hospitalisation to recovery (5 days), and \nu^{-1} the death to burial (2 days). Here, numbers are from Understanding the dynamics of ebola epidemics (in the context of ebola). The other parameters are \beta_I the transmission rate in the community (0.588), \beta_H the transmission rate in hospital (0.794) and \beta_F the transmission rate at funerals (7.653). Thus

epsilon = 0.001 
Z = c(S = 1-epsilon, E = epsilon, I=0,H=0,F=0,R=0)
p=c(alpha=1/7*7, theta=0.81, delta=0.81, betai=0.588,
    betah=0.794, blambdaf=7.653,N=1, gammah=1/5*7,
    gammaf=1/9.6*7, gammar=1/10*7, lambdaf=1/4.6*7,
    lambdar=1/5*7, nu=1/2*7)

If \boldsymbol{Z}=(S,E,I,H,F,R), we can write \frac{\partial \boldsymbol{Z}}{\partial t} = SEIHFR(\boldsymbol{Z}) where SEIHFR is

SEIHFR = function(t,Z,p){
  S=Z[1]; E=Z[2]; I=Z[3]; H=Z[4]; F=Z[5]; R=Z[6]
  alpha=p["alpha"]; theta=p["theta"]; delta=p["delta"]
  betai=p["betai"]; betah=p["betah"]; gammah=p["gammah"]
  gammaf=p["gammaf"]; gammar=p["gammar"]; lambdaf=p["lambdaf"]
  lambdar=p["lambdar"]; nu=p["nu"]; blambdaf=p["blambdaf"]
  N=S+E+I+H+F+R
  dS=-(betai*I+betah*H+blambdaf*F)*S/N
  dE=(betai*I+betah*H+blambdaf*F)*S/N-alpha*E
  dI=alpha*E-theta*gammah*I-(1-theta)*(1-delta)*gammar*I-(1-theta)*delta*gammaf*I
  dH=theta*gammah*I-delta*lambdaf*H-(1-delta)*lambdar*H
  dF=(1-theta)*(1-delta)*gammar*I+delta*lambdaf*H-nu*F
  dR=(1-theta)*(1-delta)*gammar*I+(1-delta)*lambdar*H+nu*F
  dZ=c(dS,dE,dI,dH,dF,dR)
  list(dZ)}

We can solve it, or at least study the dynamics from some starting values

library(deSolve)
times = seq(0, 50, by = .1)
resol = ode(y=Z, times=times, func=SEIHFR, parms=p)

For instance, the proportion of people infected is the following

plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="",col="red")
lines(resol[,"time"],resol[,"H"],col="blue")

Modeling pandemics (2)

When introducing the SIR model, in our initial post, we got an ordinary differential equation, but we did not really discuss stability, and periodicity. It has to do with the Jacobian matrix of the system. But first of all, we had three equations for three functions, but actually\displaystyle{{\frac{dS}{dt}}+{\frac {dI}{dt}}+{\frac {dR}{dt}}=0}so it means that our problem is here simply in dimension 2. Hence\displaystyle {\begin{aligned}&X={\frac {dS}{dt}}=\mu(N-S)-{\frac {\beta IS}{N}},\\[6pt]&Y={\frac {dI}{dt}}={\frac {\beta IS}{N}}-(\mu+\gamma)I\end{aligned}}and therefore, the Jacobian of the system is\begin{pmatrix}\displaystyle{\frac{\partial X}{\partial S}}&\displaystyle{\frac{\partial X}{\partial I}}\\[9pt]\displaystyle{\frac{\partial Y}{\partial S}}&\displaystyle{\frac{\partial Y}{\partial I}}\end{pmatrix}=\begin{pmatrix}\displaystyle{-\mu-\beta\frac{I}{N}}&\displaystyle{-\beta\frac{S}{N}}\\[9pt]\displaystyle{\beta\frac{I}{N}}&\displaystyle{\beta\frac{S}{N}-(\mu+\gamma)}\end{pmatrix}We should evaluate the Jacobian at the equilibrium, i.e. S^\star=\frac{\gamma+\mu}{\beta}=\frac{1}{R_0} and I^\star=\frac{\mu(R_0-1)}{\beta}. We should then look at the eigenvalues of the matrix.

Our very last example was

times = seq(0, 100, by=.1)
p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="")

We can compute values at the equilibrium

mu=p["mu"]; beta=p["beta"]; gamma=p["gamma"]
N=1
S = (gamma + mu)/beta
I = mu * (beta/(gamma + mu) - 1)/beta

and the Jacobian matrix

J=matrix(c(-(mu + beta * I/N),-(beta * S/N),
         beta * I/N,beta * S/N - (mu + gamma)),2,2,byrow = TRUE)

Now, if we look at the eigenvalues,

eigen(J)$values
[1] -0.024975+0.6318831i -0.024975-0.6318831i

or more precisely 2\pi/b where a\pm ib are the conjugate eigenvalues

2 * pi/(Im(eigen(J)$values[1]))
[1] 9.943588

we have a damping period of 10 time lengths (10 days, or 10 weeks), which is more or less what we’ve seen above,

The graph above was obtained using

p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[1:1e5,"time"],resol[1:1e5,"I"],type="l",xlab="time",ylab="",lwd=3,col="red")
yi=resol[,"I"]
dyi=diff(yi)
i=which((dyi[2:length(dyi)]*dyi[1:(length(dyi)-1)])<0)
t=resol[i,"time"]
arrows(t[2],.008,t[4],.008,length=.1,code=3)

If we look carefully, at the beginning, the duration is (much) longer than 10 (about 13)… but it does converge towards 9.94

plot(diff(t[seq(2,40,by=2)]),type="b")
abline(h=2 * pi/(Im(eigen(J)$values[1])))

So here, theoretically, every 10 weeks (assuming that our time length is a week), we should observe an outbreak, smaller than the previous one. In practice, initially it is every 13 or 12 weeks, but the time to wait between outbreaks decreases (until it reaches 10 weeks).

Modeling pandemics (1)

The most popular model to model epidemics is the so-called SIR model – or Kermack-McKendrick. Consider a population of size N, and assume that S is the number of susceptible, I the number of infectious, and R for the number recovered (or immune) individuals, \displaystyle {\begin{aligned}&{\frac {dS}{dt}}=-{\frac {\beta IS}{N}},\\[6pt]&{\frac {dI}{dt}}={\frac {\beta IS}{N}}-\gamma I,\\[6pt]&{\frac {dR}{dt}}=\gamma I,\end{aligned}}so that \displaystyle{{\frac{dS}{dt}}+{\frac {dI}{dt}}+{\frac {dR}{dt}}=0}which implies that S+I+R=N. In order to be more realistic, consider some (constant) birth rate \mu, so that the model becomes\displaystyle {\begin{aligned}&{\frac {dS}{dt}}=\mu(N-S)-{\frac {\beta IS}{N}},\\[6pt]&{\frac {dI}{dt}}={\frac {\beta IS}{N}}-(\gamma+\mu) I,\\[6pt]&{\frac {dR}{dt}}=\gamma I-\mu R,\end{aligned}}Note, in this model, that people get sick (infected) but they do not die, they recover. So here, we can model chickenpox, for instance, not SARS.

The dynamics of the infectious class depends on the following ratio:\displaystyle{R_{0}={\frac {\beta }{\gamma +\mu}}} which is the so-called basic reproduction number (or reproductive ratio). The effective reproductive ratio is R_0S/N, and the turnover of the epidemic happens exactly when R_0S/N=1, or when the fraction of remaining susceptibles is R_0^{-1}. As shown in Directly transmitted infectious diseases: Control by vaccination, if S/N<R_0^{-1} the disease (the number of people infected) will start to decrease.

Want to see it  ? Start with

mu = 0
beta = 2
gamma = 1/2

for the parameters. Here,  R_0=4. We also need starting values

epsilon = .001
N = 1
S = 1-epsilon
I = epsilon
R = 0

Then use the ordinary differential equation solver, in R. The idea is to say that \boldsymbol{Z}=(S,I,R) and we have the gradient \frac{\partial \boldsymbol{Z}}{\partial t} = SIR(\boldsymbol{Z})where SIR is function of the various parameters. Hence, set

p = c(mu = 0, N = 1, beta = 2, gamma = 1/2)
start_SIR = c(S = 1-epsilon, I = epsilon, R = 0)

Then we must define the time grid, and the function that returns the gradient,

times = seq(0, 10, by = .1)
SIR = function(t,Z,p){
S=Z[1]; I=Z[2]; R=Z[3]; N=S+I+R
mu=p["mu"]; beta=p["beta"]; gamma=p["gamma"]
dS=mu*(N-S)-beta*S*I/N
dI=beta*S*I/N-(mu+gamma)*I
dR=gamma*I-mu*R
dZ=c(dS,dI,dR)
return(list(dZ))}

To solve this problem use

library(deSolve)
resol = ode(y=start_SIR, times=times, func=SIR, parms=p)

We can visualize the dynamics below

par(mfrow=c(1,2))
t=resol[,"time"]
plot(t,resol[,"S"],type="l",xlab="time",ylab="")
lines(t,resol[,"I"],col="red")
lines(t,resol[,"R"],col="blue")
plot(t,t*0+1,type="l",xlab="time",ylab="",ylim=0:1)
polygon(c(t,rev(t)),c(resol[,"R"],rep(0,nrow(resol))),col="blue")
polygon(c(t,rev(t)),c(resol[,"R"]+resol[,"I"],rev(resol[,"R"])),col="red")

We can actually also visualize the effective reproductive number R_0S/N, where

R0=p["beta"]/(p["gamma"]+p["mu"])

The effective reproductive number is on the left, and as we mentioned above, when we reach 1, we actually reach the maximum of the infected,

plot(t,resol[,"S"]*R0,type="l",xlab="time",ylab="")
abline(h=1,lty=2,col="red")
abline(v=max(t[resol[,"S"]*R0&gt;=1]),col="darkgreen")
points(max(t[resol[,"S"]*R0&gt;=1]),1,pch=19)
plot(t,resol[,"S"],type="l",xlab="time",ylab="",col="grey")
lines(t,resol[,"I"],col="red",lwd=3)
lines(t,resol[,"R"],col="light blue")
abline(v=max(t[resol[,"S"]*R0&gt;=1]),col="darkgreen")
points(max(t[resol[,"S"]*R0&gt;=1]),max(resol[,"I"]),pch=19)

And when adding a \mu parameter, we can obtain some interesting dynamics on the number of infected,

times = seq(0, 100, by=.1)
p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="")

Function basis and regression

In the first part of the course on linear models, we’ve seen how to construct a linear model when the vector of covariates \boldsymbol{x} is given, so that \mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x}) is either simply \boldsymbol{x}^\top\boldsymbol{\beta} (for standard linear models) or a functional of \boldsymbol{x}^\top\boldsymbol{\beta} (in GLMs). But more generally, we can consider transformations of the covariates, so that a linear model can be used. In a very general setting, consider \sum_{j=1}^m\beta_j h_j(\boldsymbol{x}) with h_j:\mathbb{R}^p\rightarrow\mathbb{R}. The standard linear model is obtained when m=p and h_j(\boldsymbol{x})=x_j, but of course, much more general models can be obtained, for instance with h_k(\boldsymbol{x})=x_j^2 or h_k(\boldsymbol{x})=x_{j}x_{j'}, that could be used to achieve high-order Taylor expansions. In that case, we will obtain the polynomial regression, that we will discuss first. We might also think of piecewise constant functions, h_k(\boldsymbol{x})=\boldsymbol{1}(x_j\in [a,b]), that could be related to regression trees (but that is not in the scope of the STT5100 course). And if we go one step further, we might think of piecewise linear or piecewise polynomial functions, possibly with additional continuity constraints, that will lead us to spline bases.
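Just to fix ideas, such a model with arbitrary transformations h_j can already be fitted with lm and the I() wrapper; here is a small sketch (the choice of a square and a logarithm, on the cars dataset used later in this post, is arbitrary),

# E[Y|X=x] modeled as a linear combination of two transformations of the covariate
reg = lm(dist ~ I(speed^2) + I(log(speed)), data = cars)
summary(reg)$coefficients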

  • Polynomial regression

For pedagogical purpose, when I talk about polynomial regression, I always have in mind (in the univariate case) y=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_kx^k+\varepsilon, but if we use

lm(y~poly(x,k))

in R, the output is not the \beta_j‘s.

As discussed in Kennedy & Gentle (1980) Statistical Computing,

Recall that orthogonal polynomials are defined with respect to the classical inner-product (on the finite interval (a,b)){\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x)g(x)~\mathrm {d} x} And a sequence of orthogonal polynomials is (P_n) where P_n is a polynomial of degree n, for all n, and such that P_m\perp P_n for all m\neq n. Note that those polynomials are orthogonal with respect to the inner product defined above, i.e. given some finite interval (a,b). But if (a,b) changes, the polynomials will be different.

A popular family of orthogonal polynomial, on finite interval (-1,+1) is the family of Legendre polynomials, satisfying{\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)~\mathrm {d} x=0}as soon as m\neq n. Those polynomials satisfy Bonnet’s recursion formula{\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} or Rodrigues’ formula {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}}The first values are here{\displaystyle P_{0}(x)=1} {\displaystyle P_{1}(x)=x}{\displaystyle P_{2}(x)={\frac {3x^{2}-1}{2}}}{\displaystyle P_{3}(x)={\frac {5x^{3}-3x}{2}}} {\displaystyle P_{4}(x)={\frac {35x^{4}-30x^{2}+3}{8}}}
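Just to illustrate Bonnet’s recursion formula, here is a small sketch (the function legendre_rec below is mine, not from a package),

# evaluate the Legendre polynomial P_n at x using Bonnet's recursion
legendre_rec = function(n, x){
  if(n == 0) return(rep(1, length(x)))
  if(n == 1) return(x)
  ((2*n-1)*x*legendre_rec(n-1, x) - (n-1)*legendre_rec(n-2, x))/n
}
x = seq(-1, 1, by=.5)
rbind(legendre_rec(2, x), (3*x^2-1)/2)   # the two rows should match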

Interestingly, we can get those polynomial functions using

library(orthopolynom)
(leg4coef = legendre.polynomials(n=4))
[[1]]
1 
 
[[2]]
x 
 
[[3]]
-0.5 + 1.5*x^2 
 
[[4]]
-1.5*x + 2.5*x^3 
 
[[5]]
0.375 - 3.75*x^2 + 4.375*x^4

Of course, there are many families of orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, etc). Now, in R, there is the standard poly function, that we use in polynomial regression.

x = seq(-1,1,length=101)
y = poly(x,4)
y
                   1            2             3            4
  [1,] -1.706475e-01  0.215984813 -2.480753e-01  0.270362873
  [2,] -1.672345e-01  0.203025724 -2.183063e-01  0.216290298
...
[100,]  1.672345e-01  0.203025724  2.183063e-01  0.216290298
[101,]  1.706475e-01  0.215984813  2.480753e-01  0.270362873
attr(,"coefs")
attr(,"coefs")$alpha
[1] 3.157229e-17 2.655145e-16 9.799244e-17 5.368224e-16
 
attr(,"coefs")$norm2
[1]   1.0000000 101.0000000  34.3400000   9.3377328   2.4472330   0.6330176
 
attr(,"degree")
[1] 1 2 3 4
attr(,"class")
[1] "poly"   "matrix"

But these are not Legendre polynomials… As explained in 李哲源's post on stackoverflow, the idea is to start with P_{-1}(x)=0, P_{0}(x)=1 and P_{1}(x)=x, and then to define \ell_n=\langle P_n,P_n\rangle as well as \alpha_n=\langle P_nP_1,P_n\rangle/\ell_n=\langle P_n^2,P_1\rangle/\ell_n and \beta_n=\ell_n/\ell_{n-1}. Finally, define recursively{\displaystyle P_{n}(x)=(x-\alpha_{n-1})P_{n-1}(x)-\beta_{n-1}P_{n-2}(x)}and its normalized version, \tilde{P}_{n}=P_n/\sqrt{\ell_n}. That is what poly computes.

So, for pedagogical purpose, I said that I like to use y=\boldsymbol{x}^\top\boldsymbol{\beta}+\varepsilon where \boldsymbol{x}=(1,x,x^2,\cdots,x^{k-1},x^k). And actually, when using poly, we use the QR decomposition of that matrix. As discussed in 李哲源's post, we can almost reproduce the poly function using

my_poly = function (x, degree = 1) {
    xbar = mean(x)
    x = x - xbar
    QR = qr(outer(x, 0:degree, "^"))
    X = qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
    X2 = X * X
    norm2 = colSums(X * X)   
    alpha = drop(crossprod(X2, x)) / norm2
    beta = norm2 / (c(length(x), norm2[-degree]))
    colnames(X) = 1:degree
    scale = sqrt(norm2)
    X = X * rep(1 / scale, each = length(x))
    X}

Nevertheless, the two models are equivalent. More precisely,

plot(cars)
reg1 = lm(dist~speed+I(speed^2)+I(speed^3),data=cars)
reg2 = lm(dist~poly(speed,3),data=cars)
u = seq(3,26,by=.1)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

We have exactly the same prediction here

v1[u==15]
     121 
38.43919 
v2[u==15]
     121 
38.43919

And probably also quite interesting: the coefficients do not have the same interpretation (since we do not have the same basis), but the p-value for the highest degree is exactly the same here! Here the two models reject, with the same confidence, the significance of the term of degree three,

summary(reg1)
 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.50505   28.40530  -0.687    0.496
speed         6.80111    6.80113   1.000    0.323
I(speed^2)   -0.34966    0.49988  -0.699    0.488
I(speed^3)    0.01025    0.01130   0.907    0.369
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
 
summary(reg2)
 
Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)        42.98       2.15  19.988  < 2e-16 ***
poly(speed, 3)1   145.55      15.21   9.573  1.6e-12 ***
poly(speed, 3)2    23.00      15.21   1.512    0.137    
poly(speed, 3)3    13.80      15.21   0.907    0.369    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,	Adjusted R-squared:  0.6519 
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
  • B-splines regression (and GAMs)

Splines are also important in regression models, especially when we start talking about Generalized Additive Models. See Perperoglou, Sauerbrei, Abrahamowicz & Schmid (2019) for a review. In the univariate case, I introduce (linear) splines through positive parts, in the sense that y=\beta_0+\beta_1x+\beta_2(x-s_1)_++\cdots+\beta_k(x-s_{k-1})_++\varepsilon where (x-s)_+ equals 0 if x<s and x-s if x>s. Those functions are nice since they are continuous, so the model is continuous (the weighted sum of continuous functions is continuous). And we can go one step further, with y=\beta_0+\beta_1x+\beta_2x^2+\beta_3(x-s_1)^2_++\cdots+\beta_k(x-s_{k-2})^2_++\varepsilon with quadratic splines, or y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4(x-s_1)^3_++\cdots+\beta_k(x-s_{k-3})^3_++\varepsilon for cubic splines. Interestingly, quadratic splines are not only continuous, but their first derivative is also continuous (and the second one for cubic splines). So the discontinuity at the knots s_1,s_2,\cdots is now invisible…

I like those models since they are easy to interpret. For example, the simple model \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear increase, with slope \beta_1, and for larger values of x, there is a linear decrease, with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
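Here is a small sketch of that interpretation, with an (arbitrary) knot at s=10, a slope \beta_1=2 before the knot and a slope \beta_1+\beta_2=2-3=-1 after,

pos = function(x,s) (x-s)*(x>s)
vx = seq(0,20,by=.1)
plot(vx, 2*vx - 3*pos(vx,10), type="l", xlab="", ylab="")
abline(v=10, lty=2)   # the knot, where the slope changes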

Unfortunately, this is not what R is using with the bs function, which gives the standard B-splines. Just to visualize (I will skip the maths here), with R, we have

library(splines)
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
B=bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

while the functions I mentioned were (more or less) the following

pos = function(x,s) (x-s)*(x>s)
par(mfrow=c(1,2))
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = cbind(pos(x,5),pos(x,10),pos(x,20))
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
pos2 = function(x,s) (x-s)^2*(x>s)
B = cbind(pos(x,5)*20,pos2(x,5),pos2(x,10),pos2(x,20))
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

And as for the polynomial regression, the two models are equivalent. For example

plot(cars)
reg1 = lm(dist~speed+pos(speed,10)+pos(speed,20),data=cars)
reg2 = lm(dist~bs(speed,degree=1,knots=c(10,20)),data=cars)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

or more specifically

v1[u==15]
     121 
39.35747 
v2[u==15]
     121 
39.35747

So one more time, the two models are equivalent, but I still find the approach with the positive part more intuitive, and easy to understand. As well as the interpretation of coefficients,

summary(reg1)
 
Coefficients:
               Estimate Std. Error t value Pr(>|t|)  
(Intercept)     -7.6305    16.2941  -0.468   0.6418  
speed            3.0630     1.8238   1.679   0.0998 .
pos(speed, 10)   0.2087     2.2453   0.093   0.9263  
pos(speed, 20)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11
 
summary(reg2)
 
Coefficients:
                                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                                  4.621      9.344   0.495   0.6233    
bs(speed, degree = 1, knots = c(10, 20))1   18.378     10.943   1.679   0.0998 .  
bs(speed, degree = 1, knots = c(10, 20))2   51.094     10.040   5.089 6.51e-06 ***
bs(speed, degree = 1, knots = c(10, 20))3   88.859     12.047   7.376 2.49e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

Here we can see directly that the first knot was not interesting (the slope did not change significantly) while the second one was…

Testing for a causal effect (with 2 time series)

A few days ago, I came back to a sentence I found (in a French newspaper), where someone was claiming that

“… an old variable explains 85% of the change in a new variable. So we can talk about causality”

and I tried to explain that it was just stupid: if we consider the regression of the temperature on day t+1 against the number of cyclists on day t, the R^2 exceeds 80%… but it is hard to claim that the number of cyclists on a specific day will actually cause the temperature on the next day…
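Just to make that claim concrete, here is a sketch of that naive regression, using the cyclistsTempHKI.csv dataset loaded below (with its columns cyclists and meanTemp); the point is simply that a large R^2 says nothing about causality,

df = read.csv("cyclistsTempHKI.csv")
n = nrow(df)
# temperature on day t+1 regressed on the number of cyclists on day t
naive = lm(df$meanTemp[2:n] ~ df$cyclists[1:(n-1)])
summary(naive)$r.squared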

Nevertheless, that was frustrating, and I was wondering if there was a clever way to test for causality in that case. A popular one is Granger causality (I can mention a paper we published a few years ago where we use such a test, Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media). To explain that test, consider a bivariate time series (just like the one we have here), \boldsymbol{z}_t=(x_t,y_t), and consider some bivariate autoregressive model
{\displaystyle {\begin{bmatrix}x_{t}\\y_{t}\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\end{bmatrix}}+{\begin{bmatrix}a_{1,1}&\textcolor{red}{a_{1,2}}\\\textcolor{blue}{a_{2,1}}&a_{2,2}\end{bmatrix}}{\begin{bmatrix}x_{t-1}\\y_{t-1}\end{bmatrix}}+{\begin{bmatrix}u_{t}\\v_{t}\end{bmatrix}}}where \boldsymbol{\varepsilon}_t=(u_t,v_t) is some bivariate white noise, in the sense that (i) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t})=\boldsymbol{0}} (the noise is centered) (ii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^\top)=\Omega } , so the variance matrix is constant, but possibly non-diagonal (iii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t-h}^\top)=\boldsymbol{0} } for all h\neq 0. Note that we can use the simplified expression{\displaystyle {\boldsymbol{z}_t=\boldsymbol{c}+\boldsymbol{A}\boldsymbol{z}_{t-1}+\boldsymbol{\varepsilon}_t}}Now, the Granger test is based on several quantities. With off-diagonal terms of matrix \Omega, we have a so-called instantaneous causality, and since \Omega is symmetric, we will write x\leftrightarrow y. With off-diagonal terms of matrix \boldsymbol{A}, we have a so-called lagged causality, with either \textcolor{blue}{x\rightarrow y} or \textcolor{red}{x\leftarrow y} (and possibly both, if both terms are significant).

So I wanted to try on my two-variable problem.

df = read.csv("cyclistsTempHKI.csv")
dfts = cbind(C=ts(df$cyclists,start = c(2014, 1,2), frequency = 365),
             T=ts(df$meanTemp,start = c(2014, 1,2), frequency = 365))
library(vars)

I now have “time series” objects, and we can fit a VAR model,

var2 = VAR(dfts, p = 1, type = "const")
coefficients(var2)
$C
         Estimate   Std. Error   t value      Pr(>|t|)
C.l1    0.8684009   0.02889424 30.054460 8.080226e-107
T.l1   70.3042012  20.07247411  3.502518  5.102094e-04
const 807.6394001 187.75472482  4.301566  2.110412e-05
 
$T
           Estimate   Std. Error   t value     Pr(&gt;|t|)
C.l1   0.0003865391 6.257596e-05  6.177118 1.540467e-09
T.l1   0.6611135594 4.347074e-02 15.208241 6.086394e-42
const -1.6413074565 4.066184e-01 -4.036481 6.446018e-05

For instance, we can run a causality test, to check whether the number of cyclists can cause the temperature (on the next day)

causality(var2, cause = "C")
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 38.157, df1 = 1, df2 = 842, p-value = 1.015e-09

Here, we should clearly reject H_0, which states that there is no causal effect. Which is the way statisticians say that there should be some causal effect of the number of cyclists on the temperature…

So clearly, something is wrong here. Either it is some sort of superpower that cyclists are not aware of. Or this test that has been used for forty years (Clive Granger even got a Nobel Prize for it) is not working. Or we missed something. Actually… I think we missed something here. Possibly because the series are not stationary. We can almost see it with

Phi = matrix(c(coefficients(var2)$C[1:2,1],coefficients(var2)$T[1:2,1]),2,2)
eigen(Phi)
eigen() decomposition
$values
[1] 0.9594810 0.5700335

where the highest eigenvalue is very close to one. But actually, we look here at the temperature…

plot(dfts)

so, at least, we should expect some seasonal unit root here. So let us use two techniques. The first one is a classical one-year difference, \Delta_{365}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-365}

var2 = VAR(diff(dfts,365), p = 1, type = "const")
coefficients(var2)
$C
          Estimate   Std. Error   t value     Pr(&gt;|t|)
C.l1     0.8376424   0.07259969 11.537823 1.993355e-16
T.l1    42.2638410  28.58783276  1.478386 1.449076e-01
const -507.5514795 219.40240747 -2.313336 2.440042e-02
 
$T
         Estimate   Std. Error   t value     Pr(&gt;|t|)
C.l1  0.000518209 0.0003277295 1.5812096 1.194623e-01
T.l1  0.598425288 0.1290511945 4.6371154 2.162476e-05
const 0.547828079 0.9904263469 0.5531235 5.823804e-01

The test on the fitted VAR model yields

causality(var2, cause = "C") 
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 2.5002, df1 = 1, df2 = 112, p-value = 0.1167

i.e., with an 11% p-value, we cannot claim that the number of cyclists causes the temperature (on the next day), and actually, the same holds the other way around

causality(var2, cause = "T") 
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 2.1856, df1 = 1, df2 = 112, p-value = 0.1421

Nevertheless, if we look at the instantaneous causality, this one makes more sense

$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 13.081, df = 1, p-value = 0.0002982

The second idea would be to use a one day difference, \Delta_{1}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-1} and to fit a VAR model on that one

VARselect(diff(dfts,1), lag.max = 4, type="const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n) 
     3      3      2      3

but on that one, a VAR(1) model – with only one lag – might not be sufficient. It might be better to consider a VAR(3)

var2 = VAR(diff(dfts,1), p = 3, type = "const")

and on that one, one more time, we should reject the causal effect of the number of cyclists on the temperature (on the next day)

causality(var2, cause = "C")  
$Granger
 
	Granger causality H0: C do not Granger-cause T
 
data:  VAR object var2
F-Test = 0.67644, df1 = 3, df2 = 828, p-value = 0.5666

and this time, there could be a (lagged) causal effect of the temperature on the number of cyclists

causality(var2, cause = "T")  
$Granger
 
	Granger causality H0: T do not Granger-cause C
 
data:  VAR object var2
F-Test = 7.7981, df1 = 3, df2 = 828, p-value = 3.879e-05
 
$Instant
 
	H0: No instantaneous causality between: T and C
 
data:  VAR object var2
Chi-squared = 55.83, df = 1, p-value = 7.905e-14

together with a strong instantaneous relationship between the two series… So it looks like Granger causality performs well on that one!

Lasso Regression (home made)

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute Lasso regression, \min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace We have to define the soft-thresholding function S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0\\0&\text{ if }\gamma\geq|z|\end{cases} The R function would be

soft_thresholding = function(x,a){
sign(x) * pmax(abs(x)-a,0)
}

To solve our optimization problem, set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently, coordinate per coordinate,
\min_{\beta_j}\left\lbrace\frac{1}{2n}\|\mathbf{r}_j-\beta_j\mathbf{x}_j\|^2+\lambda |\beta_j|\right\rbrace
hence\min_{\beta_j}\left\lbrace\frac{1}{2n}\big(\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j\big)+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(maxiter+1)
  betalist = vector("list", maxiter+1)
  betalist[[1]] = beta
  beta0list = numeric(maxiter+1)
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      # partial residuals, removing the contribution of the k-th covariate
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      # coordinate-wise update, using the soft-thresholding function
      beta[k] = (1/sum(omega*X[,k]^2))*
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
           beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - 
             rbind(beta0,beta),'F') < tol) { break } 
  } 
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize“)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, use the OLS estimate

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
 
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
 
$intercept
[1] 2.060999e-16

and we can get the standard Lasso plot by looping,
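for instance with a sketch like the following (the grid of \lambda values is arbitrary),

# run the coordinate descent on a grid of lambda's and keep the coefficient paths
lambdas = exp(seq(log(.001),log(.5),length=50))
path = sapply(lambdas, function(l) lasso_coord_desc(X,y,beta_init,lambda=l)$beta)
matplot(log(lambdas), t(path), type="l", lty=1, xlab="log(lambda)", ylab="coefficients")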