Sometimes, with big data, matrices are too large to handle in memory, and numerical tricks are needed to still perform the computation. Map-Reduce is one of them: with several cores (or machines), the problem is split into blocks, each block is processed separately (the map step), and the partial results are aggregated at the end (the reduce step).
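As a toy illustration of the pattern (the data, the variable names and the number of blocks below are arbitrary, just to fix ideas), a global mean can be computed block by block: each block returns a partial sum and a count (the map), and the partial results are then added up (the reduce).

# toy map-reduce: the mean of a large vector, computed block by block
# (data and number of blocks are arbitrary, for illustration only)
x = rnorm(1e6)
blocks = split(x, rep(1:10, length.out = length(x)))
# map: each block returns its partial sum and its size
mapped = lapply(blocks, function(b) c(sum = sum(b), n = length(b)))
# reduce: add up the partial results, then finish the computation
reduced = Reduce(`+`, mapped)
reduced["sum"] / reduced["n"]   # same as mean(x)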
Consider the case of the linear regression, \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon} (with classical matrix notations). The OLS estimate of \mathbf{\beta} is \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}. To illustrate, consider a dataset that is not too big (here the cars dataset), and run the regression.
lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409 
y = cars$dist
X = cbind(1,cars$speed)
solve(crossprod(X,X)) %*% crossprod(X,y)
           [,1]
[1,] -17.579095
[2,]   3.932409
How is this computed in R? Actually, it is based on the QR decomposition of \mathbf{X}, \mathbf{X}=\mathbf{Q}\mathbf{R}, where \mathbf{Q} has orthonormal columns (i.e. \mathbf{Q}^T\mathbf{Q}=\mathbb{I}) and \mathbf{R} is upper triangular. Then
\widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}
solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
           [,1]
[1,] -17.579095
[2,]   3.932409
So far, so good: we get the same output. Now, what if we want to parallelise the computations? Actually, it is possible.
Consider m blocks
m = 5
and split vectors and matrices
\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right] and \mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]
To split the vectors and matrices, use (e.g.)
# the cars dataset has 50 observations: split into m = 5 blocks of 10
Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]
and compute the small QR decompositions, one per subset (this is the map step)
# map step: QR decomposition of each block, X_j = Q_j^(1) R_j^(1)
QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),
                              R=qr.R(qr(as.matrix(Xlist[[j]]))))
Now consider the QR decomposition of \mathbf{R}^{(1)}, the matrix obtained by stacking the \mathbf{R}_j^{(1)}'s; this is the first step of the reduce part:
\mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)}
where
\mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]
# reduce, step 1: stack the R_j^(1)'s and take the QR decomposition of R^(1)
R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))   # Q1 is the matrix Q^(2) in the notation above
R2 = qr.R(qr(as.matrix(R1)))   # R2 is R^(2)
Q2list = list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]   # split Q^(2) into 2-row blocks Q_j^(2)
Define, as step 2 of the reduce part,
\mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j
(indeed, since \mathbf{X}_j=\mathbf{Q}^{(1)}_j\mathbf{R}^{(1)}_j and \mathbf{R}^{(1)}_j=\mathbf{Q}^{(2)}_j\mathbf{R}^{(2)}, we get \mathbf{X}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j\mathbf{R}^{(2)}=\mathbf{Q}^{(3)}_j\mathbf{R}^{(2)}, so that stacking the \mathbf{Q}^{(3)}_j's yields the \mathbf{Q} matrix of a QR decomposition of the full \mathbf{X}), and
\mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j
# reduce, step 2: Q_j^(3) = Q_j^(1) Q_j^(2) and V_j = t(Q_j^(3)) y_j
Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]
and finally set, as step 3 of the reduce part,
\widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j
sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV + Vlist[[j]]
solve(R2) %*% sumV
           [,1]
[1,] -17.579095
[2,]   3.932409
It looks like we’ve been able to parallelise our linear regression…
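To actually dispatch the map step on several cores, the per-block work can be wrapped in a function and run with mclapply from the parallel package. Here is a minimal sketch, assuming the Xlist and ylist splits defined above (the map_block helper is just for illustration, and mclapply relies on forking, so on Windows one would use parLapply with a cluster instead):

library(parallel)

# map step: QR decomposition of each block, possibly on a separate core
map_block = function(j) {
  QRj = qr(as.matrix(Xlist[[j]]))
  list(Q = qr.Q(QRj), R = qr.R(QRj))
}
QR1 = mclapply(1:m, map_block, mc.cores = 2)

# reduce steps, exactly as above, starting from the stacked R_j^(1)
R1 = do.call(rbind, lapply(QR1, function(b) b$R))
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list = lapply(1:m, function(j) Q1[(j-1)*2+1:2,])
Vlist  = lapply(1:m, function(j) t(QR1[[j]]$Q %*% Q2list[[j]]) %*% ylist[[j]])
solve(R2) %*% Reduce(`+`, Vlist)

Note that, besides the initial split of the data, only small objects need to travel between the nodes: the p x p matrices \mathbf{R}_j^{(1)} and \mathbf{Q}_j^{(2)}, and the p-vectors \mathbf{V}_j.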
How am I supposed to calculate the inverse of R^(2), when R^(2) is a vector of matrices?
Great post! One question: mathematically, can you explain (or provide a reference for) step 2 of the reduce part (where the two Q matrices are multiplied)? I can follow all the steps, but I don't understand *why* you create Q_j^(3) like this. Thanks!
Another interesting point is the trade-off between the number of nodes and the efficiency of the calculation. If you think about it, more nodes *do not* necessarily mean a faster running time.
I took the liberty of reproducing your results (in "tidyverse" code) and added some notes on efficiency. I have an R notebook/markdown here: https://github.com/ytoren/reproducible/tree/master/linear-regression-map-reduce
Keep up the good work mate!
Awesome post!
I think you might have a typo:
You split Q^(2) into Q2list by:
> for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]
And it should be
> for(j in 1:m) Q2list[[j]] = Q2[(j-1)*2+1:2,]
indeed… thanks!