Reinforcement Learning with Parallel Computing

7 views (last 30 days)
PB75
PB75 on 9 Aug 2021
Commented: PB75 on 9 Aug 2021
Hi All,
I have been training a TD3 RNN agent on my local PC for months now. Because of my PC's performance the training period is very long, so I have been saving the experience buffer, which lets me reload the pretrained agent and restart training.
I now have access to my University HPC server, so can now use parallel computing to speed up the training process.
However, when I now attempt to restart training with the pretrained agent using parallel computing on the HPC server (the same setup previously ran on my local PC without parallel computing and without issues), it flags the following error.
Do I need to start with a fresh agent now I am using parallel computing?
Also is the following code to start parallel computing correct?
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';
Thanks
Patrick

Answers (1)

Drew Davis
Drew Davis on 9 Aug 2021
As of R2021a, the RL Toolbox does not support parallel training with RNN networks.
You can still reuse your current experience buffer to train new (non-recurrent) networks by replacing the TD3 agent's actor and critic:
% Keep the saved experiences when training restarts
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
% Swap in stateless (non-RNN) actor and critic networks
setActor(agent,statelessActor);
setCritic(agent,statelessCritic);
Your snippet to set up TD3 parallel training looks good.
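For reference, a minimal sketch of the full restart-with-parallel setup might look like the following. This assumes you already have a saved `agent`, an environment `env`, and stateless replacement networks `statelessActor`/`statelessCritic`; the `MaxEpisodes` value is purely illustrative, and you should confirm the option names against the `rlTrainingOptions` documentation for your release:

```matlab
% Sketch: resume TD3 training with parallel workers, keeping the old buffer
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
setActor(agent,statelessActor);   % statelessActor: your new non-RNN actor
setCritic(agent,statelessCritic); % statelessCritic: your new non-RNN critic

trainingOpts = rlTrainingOptions( ...
    'UseParallel',true, ...
    'MaxEpisodes',5000);          % illustrative value - tune for your problem
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';

% env is your existing environment object
trainResults = train(agent,env,trainingOpts);
```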
Hope this helps,
Drew
1 Comment
PB75
PB75 on 9 Aug 2021
Hi Drew,
Thanks for your reply. So I cannot use LSTM layers with parallel training?

