Channel: Active questions tagged r - Stack Overflow

mclapply in R - How can I make sure all my cores and memory are utilized?


I am using parallel::mclapply to parallelize a computation. I have a high-performance computing (HPC) node with 64 GB of memory and a 28-core CPU. The run time has dropped immensely after parallelizing, but a lot of memory and CPU cores are being wasted. How can I make it more efficient?

Here is the sample code:

data_sub <- do.call(rbind, mclapply(ds, predict_function, mc.cores = 28))

predict_function wraps a small function that fits a snaive, naive, or ARIMA model; which method is used is decided before the logic reaches the line above.
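The post does not show predict_function, so here is a minimal base-R sketch of the dispatch it describes. The `method` attribute, the forecast horizon, and the ARIMA order are all hypothetical stand-ins for whatever the real pipeline decides upstream.

```r
# Hypothetical sketch of the dispatch described above (base R only; the
# real code may use the forecast package instead).
predict_function <- function(series) {
  method <- attr(series, "method")  # assumed to be set earlier in the pipeline
  h <- 12                           # hypothetical forecast horizon
  pred <- switch(method,
    # naive: carry the last observed value forward
    naive  = rep(as.numeric(tail(series, 1)), h),
    # seasonal naive: repeat the last full season
    snaive = rep(as.numeric(tail(series, frequency(series))), length.out = h),
    # ARIMA: illustrative AR(1) fit via stats::arima
    arima  = as.numeric(predict(arima(series, order = c(1, 0, 0)),
                                n.ahead = h)$pred)
  )
  data.frame(method = method, horizon = seq_len(h), forecast = pred)
}
```

Each call returns a data frame, so the results rbind cleanly in the `do.call(rbind, mclapply(...))` pattern from the question.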

Here is what I often see on the log:

[Screenshot of the HPC job log omitted: a table of per-job memory and CPU utilization]

The first row shows that the job wasted 51 GB of RAM and used less than half of the CPU cores allocated to it. The third row shows the same program run on the same data, but it used more than the allocated memory while still under-utilizing the CPU cores.

Three questions are currently running in my head:

1. How does the HPC allocate memory to each job?
2. Can I split the memory and cores in my R program to run two functions in parallel? Say, run the snaive method on 14 cores and allocate the other 14 to ARIMA?
3. How can I make my job utilize all of the memory and CPU cores so it runs faster?
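On the second question, one possible approach (a sketch under assumptions, not the author's code) is to launch the two model families as separate background jobs with parallel::mcparallel, each running its own mclapply over its share of the cores, and then gather both with mccollect. The predict functions and the data splits below are toy stand-ins; on the 28-core node the two `mc.cores` values would be 14 and 14. Note that mcparallel/mclapply rely on forking and do not run in parallel on Windows.

```r
library(parallel)

# Toy stand-ins for the question's predict_function variants;
# the real snaive/ARIMA fitting code is not shown in the post.
predict_snaive <- function(x) sum(x)
predict_arima  <- function(x) mean(x)

# Hypothetical splits of `ds` by which method each series needs.
ds_snaive <- list(1:10, 11:20)
ds_arima  <- list(1:5, 6:10)

# Fork one background job per model family, each with its own worker
# pool (14 + 14 on the 28-core node; 2 + 2 here for the sketch).
job_snaive <- mcparallel(mclapply(ds_snaive, predict_snaive, mc.cores = 2))
job_arima  <- mcparallel(mclapply(ds_arima,  predict_arima,  mc.cores = 2))

# Block until both jobs finish; results come back in job order.
res <- mccollect(list(job_snaive, job_arima))
```

Whether this helps depends on the workload: if ARIMA fits dominate the run time, a 14/14 split leaves the snaive pool idle, so the split should reflect the relative cost of the two methods.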

Thanks in advance

