Not able to use multiple GPUs when training a DDPG agent

Views: 4 (last 30 days)
Benedict Bauer on 18 January 2024
Hi there,
I am having some problems training a DDPG agent on a local machine with multiple (4) GPUs. There is always only one GPU doing any work and I don't know what I'm doing wrong.
I am using a parpool with 4 workers:
parpool('Processes',4);
With
spmd
gpuDevice
end
I can see that each worker is using its own GPU.
The critic uses the UseDevice option:
critic = rlQValueFunction(criticNet,obsInfo,actInfo, ...
    UseDevice="gpu", ...
    ObservationInputNames="obsInLyr",ActionInputNames="actInLyr");
As well as the actor:
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo, ...
    UseDevice="gpu");
The training uses the following training options:
trainingOpts = rlTrainingOptions( ...
    MaxEpisodes=maxepisodes, ...
    MaxStepsPerEpisode=maxsteps, ...
    Verbose=true, ...
    Plots="none", ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=500000, ...
    ScoreAveragingWindowLength=5, ...
    SaveAgentCriteria="AverageReward", ...
    SaveAgentValue=70000);
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = "async";
and training is started using:
agent = rlDDPGAgent(actor,critic,agentOptions);
trainingStats = train(agent,env,trainingOpts);
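For reference, the agentOptions variable is not shown above; a minimal sketch of typical DDPG agent options (all values here are placeholders, not the actual settings from the post) might look like:

```matlab
% Sketch of DDPG agent options (placeholder values; uses
% rlDDPGAgentOptions from Reinforcement Learning Toolbox).
agentOptions = rlDDPGAgentOptions( ...
    SampleTime=0.01, ...             % placeholder sample time
    DiscountFactor=0.99, ...
    MiniBatchSize=256, ...
    ExperienceBufferLength=1e6);
```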
Have I forgotten anything else?
Any help would be much appreciated.

Answers (1)

Emmanouil Tzorakoleftherakis on 24 January 2024
Can you share your agent options and the architecture of the actor and critic networks? As mentioned here, "Using GPUs is likely to be beneficial when you have a deep neural network in the actor or critic which has large batch sizes or needs to perform operations such as multiple convolutional layers on input images". So it could be that there is no need to use more than one GPU for your setup.
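One quick way to confirm that each parallel worker has picked up a distinct GPU is to pin the selection explicitly (a sketch, assuming Parallel Computing Toolbox and R2022b or later for spmdIndex; on older releases, labindex plays the same role):

```matlab
% Pin each parallel worker to its own GPU and report the assignment.
parpool("Processes", 4);
spmd
    gpuDevice(spmdIndex);      % worker k selects GPU k
    d = gpuDevice;
    fprintf("Worker %d -> GPU %d (%s)\n", spmdIndex, d.Index, d.Name);
end
```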
