Parallel CPU computing for recurrent neural networks (LSTMs)

9 views (last 30 days)
ThomasP on 3 February 2022
Answered: Joss Knight on 7 February 2022
Hello,
The documentation states that parallel CPU computing for LSTMs is possible by using the trainNetwork function and setting the execution environment to parallel in trainingOptions. It also states that the Parallel Computing Toolbox is necessary.
I do have the Parallel Computing Toolbox installed; running pool = parpool reports 23 workers (the number of cores my CPU has).
I also added 'ExecutionEnvironment','parallel' to my trainingOptions() call; however, I get the error "Parallel training of recurrent networks is not supported. 'ExecutionEnvironment' value in trainingOptions function must be 'auto', 'gpu' or 'cpu'."
...why?
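For reference, here is a minimal sketch of the kind of setup that hits this error; the layer sizes, solver, option values and variable names below are placeholders, not my actual script:

layers = [ ...
    sequenceInputLayer(12)               % placeholder number of input features
    lstmLayer(100,'OutputMode','last')   % LSTM layer makes this a recurrent network
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel', ...   % this value triggers the error quoted above
    'MaxEpochs',50);

% net = trainNetwork(XTrain,YTrain,layers,options);   % errors in R2021b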

Answers (2)

Raymond Norris on 4 February 2022
I'm assuming you're only running this on your local machine (with 23 cores) and that you don't have a GPU? If so, set ExecutionEnvironment to "cpu" (or even "auto", which defaults to the GPU if one exists and to the CPU otherwise).
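For example, something like the following sketch (only the 'ExecutionEnvironment' value matters here; the solver and other options are placeholders):

options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...   % or 'auto'; both are accepted for recurrent networks
    'MaxEpochs',50);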
2 comments
ThomasP on 4 February 2022
Thanks for your answer. Yes, I'm running it on my local machine with 23 cores and don't have a GPU; however, if I set ExecutionEnvironment to "cpu", it only runs on a single core.
Raymond Norris on 4 February 2022
Right, fair point. One option is to download the R2022a prerelease to see if that resolves your issue.
Keep in mind, "parallel" will default to (any) GPU MATLAB finds. Therefore, you'll want MATLAB to ignore it by first calling
setenv CUDA_VISIBLE_DEVICES -1
and then train your model.
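Put together, a sketch of that sequence could look like this (the solver and other option values are placeholders):

setenv('CUDA_VISIBLE_DEVICES','-1')      % hide any GPU from MATLAB for this session
options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel', ...
    'MaxEpochs',50);
% net = trainNetwork(XTrain,YTrain,layers,options);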



Joss Knight on 7 February 2022
That doc page is about shallow networks (using train) rather than deep networks (using trainNetwork). Parallel training in trainNetwork for sequence networks is supported from the next release.
How are you confirming that ExecutionEnvironment 'cpu' is only using a single core? It should be using all your cores.
Parallel training for CPU is only really useful when you have a multi-node cluster of machines. Generally speaking, all CPU deep learning code is multithreaded and makes full use of your hardware, so there is no advantage to parallel training or inference - in fact, it should make things slower.
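One way to check this from within MATLAB is maxNumCompThreads, which reports how many computational threads MATLAB is currently set to use:

nThreads = maxNumCompThreads   % by default this equals the number of physical cores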

Release: R2021b
