Advantages of parpool vs. job/tasks vs. multiple batches?

Views: 11 (last 30 days)
emarch on 29 Oct 2018
Commented: Edric Ellis on 1 Nov 2018
I have an "embarrassingly" parallel MATLAB problem I am looking to parallelize, and I was just thinking about the various strategies. I've tried all three of the approaches mentioned in the question title and would be curious to hear some more experienced MATLAB users' thoughts. Essentially, I am just running the same function with different data and collecting the results.
In my experiments I've noticed that creating multiple batches incurs significant startup time compared to creating a single job with multiple tasks. The only reason I was even considering the multiple-batches approach is that it might offer a certain robustness: if one batch fails due to bad data (or a node going offline, etc.), you would still have the results from the other batches and could resubmit the batches that failed. Can a job/tasks approach be made equally robust? What happens if a task irrecoverably fails or hangs? Is there some way to recover the results from the other tasks?
As for parpool, is there an advantage to this approach that I'm missing beyond the automatic slicing of variables? Variable slicing is something I could accomplish manually using jobs/tasks or multiple batches.
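For reference, the slicing I mean looks something like this (processChunk and dataChunks stand in for my actual function and data):

parpool(8);                              % start an interactive pool of 8 workers
results = cell(1, numel(dataChunks));
parfor k = 1:numel(dataChunks)
    % dataChunks and results are sliced variables: each worker receives
    % only the elements it needs, and the outputs are gathered back
    % into results on the client automatically.
    results{k} = processChunk(dataChunks{k});
end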
Regardless of which approach is taken, the work will be accomplished via a submitted batch (or batches), as it will likely take quite some time to run, and being able to exit MATLAB on the submitting machine will be nice.

Accepted Answer

Edric Ellis on 30 Oct 2018
If you want to be able to quit MATLAB on the client machine while the work is running, then either batch or createJob & createTask is the way to go.
As you observe, there is some additional overhead when creating multiple batch jobs compared to a single createJob invocation followed by multiple (or vectorised) createTask invocations. (This is partly due to the analysis of the code files required to run each job.)
The simplest option (from a coding perspective) is to prototype your code using an interactive parpool with a parfor loop, and then offload it using batch with the 'Pool' parameter to indicate how many workers to use.
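For example, something along these lines, assuming a placeholder function runAllChunks that contains your prototyped parfor loop:

c = parcluster();                        % default cluster profile
% 'Pool', 7 requests 7 additional workers, so the parfor loop inside
% runAllChunks runs across 8 workers in total.
job = batch(c, @runAllChunks, 1, {dataChunks}, 'Pool', 7);
% The client MATLAB can now exit; later, reattach with e.g.
%   c = parcluster(); job = findJob(c, 'ID', jobID);
wait(job);
out = fetchOutputs(job);                 % cell array of the outputs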
Using multiple independent tasks is more invasive compared to the batch + 'Pool' approach, but it does give you a degree of resilience against individual worker failures.
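A sketch of the independent-tasks approach, with the same placeholder names:

c = parcluster();
job = createJob(c);
for k = 1:numel(dataChunks)
    % One independent task per data chunk; if one task errors or its
    % worker dies, the remaining tasks are unaffected.
    createTask(job, @processChunk, 1, dataChunks(k));   % 1x1 cell of inputs
end
submit(job);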
2 Comments
emarch on 31 Oct 2018
Thanks for the reply. I've been doing more experimenting, and it looks like MATLAB multi-task jobs are fairly robust: even if tasks fail or a node goes offline, the wait(job) call will eventually return. Rather than using fetchOutputs(job), it looks like it's best to iterate through the tasks and check for errors before grabbing the output. I tested by killing some MATLAB processes on the cluster in Task Manager, and I still got some results. I'm guessing some sort of heartbeat system must be used.
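Something like this sketch is what I mean (placeholder names as before; as I understand it, fetchOutputs(job) errors out if any task failed, so each task is inspected instead):

wait(job);
results = cell(1, numel(job.Tasks));
for k = 1:numel(job.Tasks)
    t = job.Tasks(k);
    if isempty(t.Error)
        out = t.OutputArguments;         % cell array of this task's outputs
        results{k} = out{1};
    else
        fprintf('Task %d failed: %s\n', k, t.Error.message);
        % remember k so the corresponding chunk can be resubmitted
    end
end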
Can you think of any circumstances where a renegade task might prevent one from fetching the results from tasks that completed successfully?
Edric Ellis on 1 Nov 2018
For independent tasks, there should be no such interference.

More Answers (0)
