Yesterday, Daniel Marcelino published an interesting post on his blog, entitled Parallel Processing: When does it worth ? I was asking myself the same question for a chapter I am currently writing, and I liked his approach, so I tried to do the same on my computer. I used three packages to run parallel R code,
> library(multicore)
> library(snow)
> library(snowfall)
and one to quantify the time needed to run the code
> library(microbenchmark)
I ran the code on my Mac, at the office,
> all=detectCores(all.tests=TRUE)
> all
[1] 4
which is a standard computer, with four cores. To run some code, I had to generate datasets. Here, I consider a data frame, with n rows, and 100 columns. I generate values using a Gaussian distribution,
> gen=function(n) data.frame(matrix(rnorm(n*100),n,100))
The goal, here, will be to compute quantiles (or to be more specific, quartiles) per column, and to replicate that 100 times. Here, the standard technique is to use lapply. But (at least) two parallel versions of that function can be found. So, let us use them,
> base=gen(n=100)
> microbenchmark(
+ mlapp=data.frame(lapply(base, quantile, probs = 1:3/4 )),
+ mclapp=data.frame(mclapply(base, quantile, probs = 1:3/4 , mc.cores = all)),
+ sflapp=data.frame(sfLapply(base, quantile, probs = 1:3/4 )),
+ times=100) -> m
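One detail worth mentioning: for sfLapply to actually distribute the work, a snowfall cluster has to be initialised first (that step does not appear above); a minimal sketch would be
> sfInit(parallel=TRUE, cpus=all)   # start the local workers (here, four)
> # ... run the benchmark above ...
> sfStop()                          # release the workers when done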
For instance, with 100 rows, we have
> m
Unit: milliseconds
    expr      min       lq   median       uq       max
1 mclapp 50.19290 55.90364 57.99185 64.10619 266.88692
2  mlapp 26.94146 29.49396 31.20571 49.54824  75.60251
3 sflapp 27.54857 30.10224 31.41864 47.10688  59.28925
And with 500,000 rows, we have
> m
Unit: seconds
    expr       min         lq     median        uq      max
1 mclapp 42.999504 103.873919 161.989876 258.66887 660.2953
2  mlapp  3.720542   3.770319   4.070116  11.90181 166.9461
3 sflapp  3.587703   3.770399   4.027876  10.62654 181.0093
So yes, using parallel code would be very interesting ! Especially with very large datasets (I could not run it with 1 million rows). If we consider a loop, to see the evolution of the median time for each of those three functions, we can plot the time it took as a function of the number of rows,
> i=1; vk=seq(1,6,by=.2)
> col=seq(i,3*length(vk),by=3)
> plot(10^vk,db[2,col],ylim=range(db),col="white",log="x",
+      xlab="Number of rows",ylab="Time")
> polygon(c(10^vk,rev(10^vk)),c(db[1,col],rev(db[3,col])),col="light blue",border=NA)
> lines(10^vk,db[2,col],col="blue",lwd=2)
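The loop that fills the matrix db used above is not reproduced here; a minimal sketch of one possible way to build it (the layout, three columns per sample size with the lower quartile, median and upper quartile in the rows, is my assumption) would be
> vk=seq(1,6,by=.2)
> db=matrix(NA,3,3*length(vk))
> for(k in 1:length(vk)){
+   base=gen(n=round(10^vk[k]))
+   m=microbenchmark(
+     mlapp=data.frame(lapply(base, quantile, probs = 1:3/4 )),
+     mclapp=data.frame(mclapply(base, quantile, probs = 1:3/4 , mc.cores = all)),
+     sflapp=data.frame(sfLapply(base, quantile, probs = 1:3/4 )),
+     times=100)
+   s=summary(m, unit="s")
+   # keep the lower quartile, median and upper quartile of each expression,
+   # three columns per sample size (one per expression, in summary() order)
+   db[,3*(k-1)+1:3]=t(as.matrix(s[,c("lq","median","uq")]))
+ }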
Here, we have the following, with the standard lapply on the left (the line is the median time, with the 25% and 75% quartiles), the multicore function in the middle, and the snowfall function on the right,
If we zoom in, on small datasets (less than 10,000 rows and 100 columns), we do observe a gain, since the code ran two times faster.
So clearly, it might be interesting to write code to distribute over different cores. But here, I use a simple function (I compute quantiles on the columns of a dataset). I should try with a more complex function…
On the other hand, I should mention that, usually, while I have one (or two) codes running, I can do something else: looking for recent papers for ongoing research projects, answering emails that I should have answered a few weeks ago, checking for typos in the book and updating the tex file, or typing parts of a future post on my blog, etc. The problem I had yesterday afternoon, when I ran the code, was that, suddenly, all the cores on my computer were dedicated to that R code. I could not even finish an email I had started before running the code… So finally I left earlier, decided to pick up the kids after school, and went to the park, to enjoy the sunny day we had ! So I have to admit that running parallel code can have advantages you could not think of !
Hi
If you want to run code on all your cores and still be able to surf the internet or answer your mails, do the following:
-on Linux, use “nice”. Google “linux nice” to see how.
-on Windows, set the priority of one R process to “low”:
–hit Ctrl+Shift+Esc, the task manager opens
–right-click one R process (in the Processes tab)
–set the priority to “low”
This way, R will run with all the CPU you are not currently using for other stuff. This means that if you read a webpage, R will use 100% of the CPU. If you click a link, while the new webpage loads, all the CPU power will go to, say, Firefox, for about 2 seconds, and once the page is rendered, go back to R.
You won’t notice any difference when using your browser, and you will notice almost no difference in how long your R script takes to run.
I hope this helps you, let me know.
PS: once you understand the idea of priority, you can play around with the task manager as much as you want.
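For what it’s worth, the same effect can also be obtained from within R itself, with the psnice() function from the tools package (just a sketch; on Unix-alikes 19 is the lowest priority, and on Windows the value is mapped to a priority class):
> library(tools)
> psnice(Sys.getpid())      # current niceness of this R process
> psnice(Sys.getpid(), 19)  # lower its priority as much as possible
Workers forked by mclapply should then inherit that low priority.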
Arthur, nice experiment. I’m thinking that instead of dedicating a given number of CPUs, one could program in terms of a percentage of the capacity of the computer, for instance 75%, 85%, 90% and so forth.
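In R, that idea could look something like the following sketch (frac_cores is just a made-up helper name):
> frac_cores=function(p) max(1, floor(p*detectCores(all.tests=TRUE)))
> mclapply(base, quantile, probs = 1:3/4, mc.cores = frac_cores(.75))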
or (for simple functions) see how to use C/C++ (with Rcpp) and how it might affect computation time. But to be honest, I have a tradeoff, research versus blog… hard to do both !
Thanks for a nice post. I liked your philosophical analysis of the advantages of parallel computing!
Not sure I understand the output from microbenchmark in your article. When I tried your code, I got values for mclapp that were lower than the ones for mlapp – was there a problem with labelling of the columns in your text?
The values I got for sflapp are about the same as the ones for mlapp – apparently sflapp still does things in sequence rather than in parallel on my machine. Will have to look into this. I am new to parallel computing, still a lot to try and figure out!
Here’s the output from your microbenchmark program on my machine (8 cores i7, ubuntu, R 3.0):
Unit: seconds
expr min lq median uq max neval
mlapp 4.551621 4.630141 4.637104 4.819816 5.009368 100
mclapp 1.282461 1.430697 1.444762 1.569997 1.901679 100
sflapp 4.554641 4.629345 4.633371 4.689492 4.895747 100
You have 8 cores, while I had 4… I will check again. Actually, I have access to a Windows server, with a lot of cores, so I will try to see the impact of the number of cores (as you can see, parallelizing is also new for me, but I am willing to share the experiments I am running !). I will check on my new laptop, when I get it (it will be an Ubuntu machine too)
One step ahead is to run your parallelized R code on a cluster using the Rmpi package… In fact, I have decided to try it soon !
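For the record, a minimal sketch of what that could look like with Rmpi (assuming a working MPI installation, and reusing base and all from the post) would be
> library(Rmpi)
> mpi.spawn.Rslaves(nslaves=all)                   # start one R worker per core
> q=mpi.parLapply(base, quantile, probs = 1:3/4)   # distributed version of lapply
> mpi.close.Rslaves()                              # shut the workers down
> mpi.quit()                                       # terminate MPI (and leave R)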