On the robustness of LASSO

Probably the last post on the lasso before the summer break… More specifically, I was wondering about the interpretation of the graphs \lambda\mapsto\widehat{\beta}_\lambda. We use them for variable selection, but my main concern was about confidence intervals: how much can we trust those lines?

As usual, a natural way is to use simulations on generated datasets. Consider for instance

Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
              X5=runif(n),
              X6=exp(X[,3]),
              X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
              X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)

One can simulate several datasets that way, fit the lasso on each of them, and store the output

library(glmnet)
vlambda = exp(seq(-8,1,length=201))
VLASSO = list()
# for each simulation s (re-generating df as above), fit the lasso and store the path
X = model.matrix(lm(Y~.,data=df))
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)
VLASSO[[s]] = as.matrix(lasso$beta)

To visualize confidence bands, one can compute quantiles

Q05=Q95=Qm=matrix(NA,9,201)
for(i in 1:nrow(Q05)){
  for(j in 1:ncol(Q05)){
    v = unlist(lapply(VLASSO,function(x) x[i,j]))
    Q05[i,j] = quantile(v,.05)
    Q95[i,j] = quantile(v,.95)
    Qm[i,j]  = mean(v)
  }}

and get the graph

library(RColorBrewer)
colrs = c(brewer.pal(8,"Set1"))
plot(lasso,col=colrs,"lambda",ylim=c(min(Q05),max(Q95)))
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[2,],rev(Q95[2,])),col=colrs[1],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[5,],rev(Q95[5,])),col=colrs[2],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[8,],rev(Q95[8,])),col=colrs[3],border=NA)

An alternative (more realistic on real data) is to use a bootstrapped version of the dataset

id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)
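
To mimic the previous exercise, the bootstrap fits can simply be wrapped in the same kind of loop (a sketch, storing the coefficient paths in a VLASSO list as before):

VLASSO = list()
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)
}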


So far, it looks like it is working very well. Now, what if we have a smaller dataset,

n = 100

On simulated new samples, we get


while the bootstrap version is

There is more uncertainty, clearly, but the conclusion is not ambiguous here.

Now, what about real data? Consider the following

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
tail(chicago)
   Fire   X_1 X_2    X_3
42  4.8 0.152  19 13.323
43 10.4 0.408  25 12.960
44 15.6 0.578  28 11.260
45  7.0 0.114   3 10.080
46  7.1 0.492  23 11.428
47  4.9 0.466  27 13.731

with one variable of interest (the number of fires, per inhabitant) and 3 features. We can here use the bootstrap to generate samples, and then fit a lasso regression. On the original dataset, the regression is

library(glmnet)
X = model.matrix(lm(Fire~.,data=chicago))
vlambda = exp(seq(-4,2,length=201))
lasso = glmnet(x=X,y=chicago[,"Fire"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)

And if we just plot lines \lambda\mapsto\widehat{\beta}_\lambda we get
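
The plotting call itself is not shown above; a minimal version (reusing the colrs palette defined earlier) would be

plot(lasso,col=colrs,"lambda",lwd=2)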

Now, consider bootstrap samples.

for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  plot(lasso,col=colrs,"lambda",lwd=.2,add=TRUE)}

We get here

The interpretation is now much more difficult.

What about the order in which the variables enter the model?

N = matrix(NA,100000,4)
for(s in 1:100000){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],
                 family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  # order the variables by the number of lambda's at which their coefficient is nonzero
  N[s,] = names(sort(apply(as.matrix(lasso$beta),
                           1,function(x) sum(x!=0))))}

The ordering that was obtained on the original dataset was recovered in roughly 57% of the bootstrap scenarios,

mean(apply(N,1,function(x) paste(x,collapse="")=="(Intercept)X_1X_2X_3"))
[1] 0.5693

We can look at all the cases,

L=as.character(c(123,132,213,231,312,321))
Li=paste("(Intercept)X_",substr(L,1,1),"X_",
         substr(L,2,2),"X_",substr(L,3,3),sep="")
g=function(y) mean(apply(N,1,function(x) paste(x,collapse="")==y))
vL=unlist(lapply(Li,g))
names(vL)=L
barplot(vL,las=2,horiz=TRUE)

Standardization in LASSO

The lasso regression is based on the idea of solving \widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbrace where \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i| for any \mathbf{a}\in\mathbb{R}^d. In a recent post, we’ve seen computational aspects of the optimization problem. But I went quickly through the story of the \ell_1-norm. Since the penalty simply adds up the |\beta_j|’s, it implicitly assumes that \beta_1 and \beta_2 are expressed on comparable scales. With two significant variables measured on very different scales, the relative magnitudes of \widehat{\beta}_1 and \widehat{\beta}_2 will be very different, and the penalty will shrink them very unevenly. This is why it is usually said that one should center and scale (i.e. standardize) the variables.
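
To see why the scale matters, here is a minimal sketch (not from the original analysis; the variable names and the value lambda=.5 are arbitrary choices for illustration): the same two-variable model is fitted twice with standardize=FALSE, once after multiplying the second covariate by 100.

library(glmnet)
set.seed(1)
x1 = rnorm(200)
x2 = rnorm(200)
y  = x1 + x2 + rnorm(200)
x2b = 100*x2
fit_raw    = glmnet(cbind(x1,x2), y,standardize=FALSE,lambda=.5)
fit_scaled = glmnet(cbind(x1,x2b),y,standardize=FALSE,lambda=.5)
coef(fit_raw)     # both slopes of comparable magnitude, both clearly shrunk
coef(fit_scaled)  # the slope on 100*x2 is around .01, and the penalty barely touches it

The two coefficients no longer live on the same scale, so the \ell_1 penalty cannot treat them symmetrically.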

Consider the following (simulated) dataset

Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
X5=runif(n),X6=exp(X[,3]),
X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
X = model.matrix(lm(Y~.,data=df))

Use the following colors for the graphs, and the following grid of values for \lambda

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)]
vlambda=exp(seq(-8,1,length=201))

The first regression we can run is a non-standardized one

library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=FALSE)

We can visualize the graphs of \lambda\mapsto\widehat{\beta}_\lambda

idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)

At least, observe that the most significant variables are the ones that were actually used to generate the data.
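
To make that statement a bit more concrete, here is a quick sketch (the cut-off log(\lambda)=-2 is an arbitrary choice): list the coefficients that are still nonzero at that point of the path.

j = which.min(abs(log(lasso$lambda)-(-2)))      # column of the path closest to log(lambda) = -2
rownames(lasso$beta)[which(lasso$beta[,j]!=0)]  # hopefully X1, X4 and the X7 dummy, used to generate Y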

Now, consider the case where we standardize the data

lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=TRUE)

The graphs of \lambda\mapsto\widehat{\beta}_\lambda

The graph is (strangely) very similar to the previous one, except perhaps for the green curve. Maybe categorical variables do not behave like continuous ones… Somehow, standardising a categorical variable is not really natural…
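
A rough sketch of the issue: with standardize=TRUE, the coefficient of variable j is effectively penalized in proportion to sd(x_j), and for a balanced 0/1 dummy that standard deviation is only about .5, half that of a unit-variance covariate.

d = (df$X7=="A")*1
c(mean(d),sd(d))   # both roughly .5: the dummy has half the spread of a standardized covariate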

Why not consider some home-made standardization? Let us transform (linearly) all the variables in the X matrix (except the first one, which is the intercept)

Xc = X
for(j in 2:ncol(X)) Xc[,j]=(Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])

Now, we can run our lasso regression on that matrix (keeping the intercept, since all the covariates are centered, but y is not)

lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,intercept=TRUE,lambda=vlambda)

The plot is now

plot(lasso,col=colrs,"lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)

Actually, why not also standardize the y variable, and remove the intercept altogether

Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"])
lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,intercept=FALSE,lambda=vlambda)
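
Note that scaling y only changes the units of the path: as a sketch (not in the original post), with s=\text{sd}(y) one has \widehat{\beta}^{(y/s)}(\lambda)=\widehat{\beta}^{(y)}(s\lambda)/s, so this last path is the previous one rescaled, both in \beta and in \lambda. A quick numerical check:

s = sd(df[,"Y"])
lasso_c = glmnet(x=Xc,y=df[,"Y"]-mean(df[,"Y"]),family="gaussian",alpha=1,
                 intercept=FALSE,lambda=vlambda*s)
max(abs(as.matrix(lasso_c$beta)-s*as.matrix(lasso$beta)))   # close to 0, up to convergence tolerance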

Hopefully, those graphs are very consistent (and if we use them for variable selection, they suggest using the variables that were actually used to generate the dataset). And having qualitative and quantitative variables together is not a big deal. But still, I do not feel comfortable with the differences…

Convex Regression Model

This morning, during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to show the R code, I will publish it here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Recall that m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex},\ m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, we have a quadratic program with n-2 linear constraints.

m^\star is then a piecewise linear function, obtained by interpolating the consecutive pairs (x_i,\theta_i^\star).
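
As a minimal sketch of that quadratic program, one can use the quadprog package (this is not from the original post; the simulated data, with equally spaced x's to keep the slope constraints well defined, and all object names are mine):

library(quadprog)
set.seed(1)
n = 50
x = seq(-2,2,length=n)            # distinct, ordered design points
y = x^2 + rnorm(n,sd=.5)          # a convex signal plus noise
# constraint j: slope on (x[j+1],x[j+2]) >= slope on (x[j],x[j+1]), for j = 1,...,n-2
A = matrix(0,n-2,n)
for(j in 1:(n-2)){
  A[j,j]   =  1/(x[j+1]-x[j])
  A[j,j+1] = -1/(x[j+1]-x[j]) - 1/(x[j+2]-x[j+1])
  A[j,j+2] =  1/(x[j+2]-x[j+1])
}
# minimize ||y-theta||^2, i.e. (1/2) theta' I theta - y' theta, subject to A theta >= 0
# (solve.QP expects the transpose of the constraint matrix)
theta = solve.QP(Dmat=diag(n), dvec=y, Amat=t(A), bvec=rep(0,n-2))$solution
plot(x,y)
lines(x,theta,col=2,lwd=2)        # the convex, piecewise linear fit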

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}.

More generally, if m is convex, then for each \mathbf{x} there exists \xi_{\mathbf{x}}\in\mathbb{R}^n such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}. Such a \xi_{\mathbf{x}} is a subgradient of m at \mathbf{x}, and the subdifferential is \partial m(\mathbf{x})=\big\lbrace \xi\in\mathbb{R}^n : m(\mathbf{x})+ \xi^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\ \forall \mathbf{y}\in\mathbb{R}^n\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace subject to \theta_i+\xi_i^{\text{T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^n. Now, to do it for real, use the cobs package for constrained (b-)spline regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(7,8,9,20,23)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Summer School, Big Data and Economics

This week I will be giving a lecture at the 2018 edition of the Summer School at the UB School of Economics, in Barcelona. It will be a four-day crash course, starting on Tuesday (morning).

Lecture 1: Introduction : Why Big Data brings New Questions
Lecture 2: Simulation Based Techniques & Bootstrap
Lecture 3: Loss Functions : from OLS to Quantile Regression
Lecture 4: Nonlinearities and Discontinuities
Lecture 5: Cross-Validation and Out-of-Sample diagnosis
Lecture 6: Variable and model selection
Lecture 7: New Tools for Classification Problems
Lecture 8: New Tools for Time Series & Forecasting

Some slides are available on github, and, probably more interesting, I will upload an R markdown file with all the code.

Game of Friendship Paradox

In the introduction of my course next week, I will (briefly) mention networks, and I wanted to provide some illustration of the Friendship Paradox. On the Network of Thrones website (discussed in Beveridge and Shan (2016)), there is a dataset with the network of characters in Game of Thrones. The word “friend” might be abusive here, but let’s call connected nodes “friends”. The friendship paradox states that

People on average have fewer friends than their friends

This was discussed in Feld (1991) for instance, or Zuckerman & Jost (2001). Let’s try to see what it means here. First, let us get a copy of the dataset

download.file("https://www.macalester.edu/~abeverid/data/stormofswords.csv","got.csv")
GoT=read.csv("got.csv")
library(networkD3)
simpleNetwork(GoT[,1:2])

Because it is difficult for me to incorporate some d3js script in the blog, I will illustrate with a more basic graph,

Consider a vertex v\in V in the undirected graph G=(V,E) (with classical graph notations), and let d(v) denote the number of edges touching it (i.e. v has d(v) friends). The average number of friends of a random person in the graph is \mu = \frac{1}{n_V}\sum_{v\in V} d(v)=\frac{2 n_E}{n_V} The average number of friends that a typical friend has is
\frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right) But
\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)=\sum_{\{v,v'\} \in E} \left(\frac{d(v')}{d(v)}+\frac{d(v)}{d(v')}\right)=\sum_{\{v,v'\} \in E}\left(\frac{d(v')^2+d(v)^2}{d(v)d(v')}\right)=\sum_{\{v,v'\} \in E} \left(\frac{(d(v')-d(v))^2}{d(v)d(v')}+2\right)\geq\sum_{\{v,v'\} \in E} 2=\sum_{v\in V} d(v)
Thus, \frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)\geq \frac{1}{n_V}\sum_{v\in V} d(v)
Note that this can be related to the variance decomposition \text{Var}[X]=\mathbb{E}[X^2]-\mathbb{E}[X]^2, i.e. \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]} =\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}\geq\mathbb{E}[X] (Jensen's inequality). But let us get back to our network. The list of nodes is

M=(rbind(as.matrix(GoT[,1:2]),as.matrix(GoT[,2:1])))
nodes=unique(M[,1])

and for each of them, we can get the list of friends, and the number of friends

friends = function(x) as.character(M[which(M[,1]==x),2])
nb_friends = Vectorize(function(x) length(friends(x)))

as well as the number of friends that friends have, and the average number of friends of friends

friends_of_friends = function(y) (Vectorize(function(x) length(friends(x)))(friends(y)))
nb_friends_of_friends = Vectorize(function(x) mean(friends_of_friends(x)))

We can look at the density of the number of friends, for a random node,

Nb  = nb_friends(nodes)
Nb2 = nb_friends_of_friends(nodes)
hist(Nb,breaks=0:40,col=rgb(1,0,0,.2),border="white",probability = TRUE)
hist(Nb2,breaks=0:40,col=rgb(0,0,1,.2),border="white",probability = TRUE,add=TRUE)
lines(density(Nb),col="red",lwd=2)
lines(density(Nb2),col="blue",lwd=2)


and we can also compute the averages, just to check

mean(Nb)
[1] 6.579439
mean(Nb2)
[1] 13.94243

So, indeed, people on average have fewer friends than their friends.
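
As a sketch, we can also check the variance decomposition mentioned above on the degree distribution (this compares \mathbb{E}[D^2]/\mathbb{E}[D], an edge-weighted average of degrees, with \mathbb{E}[D]):

mean(Nb^2)/mean(Nb)           # E[D^2]/E[D]
v = mean(Nb^2)-mean(Nb)^2     # population variance of the degrees
mean(Nb)+v/mean(Nb)           # same value, via E[D] + Var[D]/E[D]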

Parallelizing Linear Regression or Using Multiple Sources

My previous post explained how it is mathematically possible to parallelize computations to estimate the parameters of a linear regression. More specifically, we have an n\times k matrix \mathbf{X} and an n-dimensional vector \mathbf{y}, and we want to compute \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y} by splitting the job. Instead of using the n observations at once, we’ve seen that it is possible to compute “something” using the first n_1 rows, then the next n_2 rows, etc. Then, finally, we “aggregate” the m objects created to get our overall estimate.

Parallelizing on multiple cores

Let us see how it works from a computational point of view, running each computation on a different core of the machine. Each core will act as a slave, computing what we’ve seen in the previous post. Here, the data we use are

y = cars$dist
X = data.frame(1,cars$speed)
k = ncol(X)

On my laptop, I have three cores, so we will split it in m=3 chunks

library(parallel)
library(pbapply)
ncl = detectCores()-1
cl = makeCluster(ncl)

This is more or less what we will do: we have our dataset, and we split the jobs,

We can then create lists containing elements that will be sent to each core, as Ewen suggested,

chunk = function(x,n) split(x, cut(seq_along(x), n, labels = FALSE))
a_parcourir = chunk(seq_len(nrow(X)), ncl)
for(i in 1:length(a_parcourir)) a_parcourir[[i]] = rep(i, length(a_parcourir[[i]]))
Xlist = split(X, unlist(a_parcourir))
ylist = split(y, unlist(a_parcourir))

It is also possible to simplify the QR functions we will use

compute_qr = function(x){
  list(Q=qr.Q(qr(as.matrix(x))),R=qr.R(qr(as.matrix(x))))
}
get_Vlist = function(j){
  Q3 = QR1[[j]]$Q %*% Q2list[[j]]
  t(Q3) %*% ylist[[j]]
}
clusterExport(cl, c("compute_qr", "get_Vlist"), envir=environment())

Then, we can run our functions on each core. The first one is

  QR1 = parLapply(cl=cl,Xlist, compute_qr)

note that it is also possible to use

  QR1 = pblapply(Xlist, compute_qr, cl=cl)

which will include a progress bar (that can be nice when the database is rather large). Then use

  library(magrittr)
  R1 = pblapply(QR1, function(x) x$R, cl=cl) %>% do.call("rbind", .)
  Q1 = qr.Q(qr(as.matrix(R1)))
  R2 = qr.R(qr(as.matrix(R1)))
  Q2list = split.data.frame(Q1, rep(1:ncl, each=k))
  clusterExport(cl, c("QR1", "Q2list", "ylist"), envir=environment())
  Vlist = pblapply(1:length(QR1), get_Vlist, cl=cl)
  sumV = Reduce('+', Vlist)

and finally the output is

solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Using multiple sources

In practice, it might also happen that various “servers” have the data, but we cannot get a copy. But it is possible to run some functions on their server, and get some output, that we can use afterwards.

Datasets are supposed to be available somewhere. We can send a request, and get a matrix. Then we aggregate all of them, and send another request. That’s what we will do here. Provider j should run f_1(\mathbf{X}) on his part of the data, and that function will return \mathbf{R}^{(1)}_j. More precisely, to the first provider, send

function1 = function(subX){
return(qr.R(qr(as.matrix(subX))))}
R1 = function1(Xlist[[1]])

and actually, send that function to all providers, and aggregate the output

for(j in 2:m) R1 = rbind(R1,function1(Xlist[[j]]))

Then create, on your side, the following objects

Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*k+1:k,]

Finally, contact one last time the providers, and send one of your objects

function2=function(subX,suby,Q){
Q1=qr.Q(qr(as.matrix(subX)))
Q2=Q
return(t(Q1%*%Q2) %*% suby)}

Provider j should then run f_2(\mathbf{X},\mathbf{y},\mathbf{Q}_j^{(2)}) on his part of the data, using also \mathbf{Q}_j^{(2)} as an argument (that we obtained on our side), and that function will return [\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j]^{T}\mathbf{y}_j. For instance, ask the first provider to run

sumV = function2(Xlist[[1]],ylist[[1]], Q2list[[1]])

and do the same with all providers

for(j in 2:m) sumV = sumV+ function2(Xlist[[j]],ylist[[j]], Q2list[[j]])
solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Linear Regression, with Map-Reduce

Sometimes, with big data, matrices are too big to handle, and it is possible to use tricks to still do the computations. Map-Reduce is one of those. With several cores, it is possible to split the problem, to map on each machine, and then to aggregate it back at the end.

Consider the case of the linear regression, \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon} (with classical matrix notations). The OLS estimate of \mathbf{\beta} is \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}. To illustrate, consider a not too big dataset, and run some regression.

lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409
y=cars$dist
X=cbind(1,cars$speed)
solve(crossprod(X,X))%*%crossprod(X,y)
           [,1]
[1,] -17.579095
[2,]   3.932409

How is this computed in R? Actually, it is based on the QR decomposition of \mathbf{X}, \mathbf{X}=\mathbf{Q}\mathbf{R}, where \mathbf{Q} has orthonormal columns (i.e. \mathbf{Q}^T\mathbf{Q}=\mathbb{I}). Then \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}

solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
           [,1]
[1,] -17.579095
[2,]   3.932409

So far, so good, we get the same output. Now, what if we want to parallelise computations? Actually, it is possible.

Consider m blocks

m = 5

and split vectors and matrices
\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right] and \mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]
To split the vectors and matrices, use (e.g.)

Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]

and get the small QR decompositions (one per subset)

QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),R=qr.R(qr(as.matrix(Xlist[[j]]))))

Consider the QR decomposition of \mathbf{R}^{(1)}, which is the first step of the reduce part: \mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)} where \mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]

R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]

Define – as step 2 of the reduce part – \mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j and \mathbf{V}_j=\mathbf{Q}^{(3)\text{T}}_j\mathbf{y}_j

Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]

and finally set – as the step 3 of the reduce part – \widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j

sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV+Vlist[[j]]
solve(R2) %*% sumV
           [,1]
[1,] -17.579095
[2,]   3.932409

It looks like we’ve been able to parallelise our linear regression…
