How to use the GPU for actor and critic while environment simulation happens on multiple cores for RL training
Views: 8 (last 30 days)
Hi,
I am new to GPU computing. I am using Reinforcement Learning Toolbox, in particular rlACAgent.
Training runs normally on multiple cores within the system, but because the actor and critic networks are large, training is slow. When I move the actor and critic networks to the GPU and start training, only the first N episodes (N = number of cores in the pool) run properly; beyond that, every episode settles at zero reward (not exactly zero, but a negative number close to zero).
Is there a way to use both the GPU and the CPU for RL training, with the GPU for the networks and the CPU for the environment (Simulink) simulation?
Thanks in advance.
System spec:
CPU: Intel Xeon Gold 5220
RAM: 128 GB
GPU: NVIDIA RTX 2080
% Run the actor and critic networks on the GPU
device = 'gpu';
actorOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice',device);
criticOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice',device);
% actor, critic, and agentOpts are created earlier (not shown); actorOpts and
% criticOpts must be passed in when the representations are constructed,
% otherwise the networks stay on the CPU.
agent = rlACAgent(actor,critic,agentOpts);
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',3e4, ...
    'MaxStepsPerEpisode',ceil(4/(10*Ts)), ...   % Ts is the sample time (defined earlier)
    'ScoreAveragingWindowLength',10, ...
    'Verbose',true, ...
    'Plots','training-progress', ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',10005, ...
    'SaveAgentCriteria','AverageReward', ...
    'SaveAgentValue',2000);
% Parallel training: each worker simulates the environment and sends
% gradients back every 20 steps (A3C-style)
trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = "sync";
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "gradients"; % for A3C
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 20;
trainOpts.ParallelizationOptions.WorkerRandomSeeds = -1;  % unique seed per worker
trainOpts.StopOnError = 'off';
trainingStats = train(agent,env,trainOpts);
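One thing to note: the snippet builds actorOpts and criticOpts but never shows them being applied, and the 'UseDevice' setting only takes effect if the options are supplied when the representations are constructed. A minimal sketch, assuming the R2020a-era representation API; criticNet, actorNet, obsInfo, actInfo, and the 'observation' layer name are placeholders, not names from the question:
% Sketch: pass the options objects when building the representations so the
% networks actually run on the GPU. criticNet/actorNet are network objects
% created earlier; obsInfo/actInfo come from the environment.
critic = rlValueRepresentation(criticNet,obsInfo, ...
    'Observation',{'observation'},criticOpts);  % name must match an input layer
actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'observation'},actorOpts);
agent = rlACAgent(actor,critic,agentOpts);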
Comments: 0
Answers (1)
Anh Tran
27 March 2020
We are continuously improving GPU training performance with parallel computing in future releases. For now, I would recommend one of the following options to improve training speed (see the sketch after this list):
- Use parallel computing and not the GPU: set StepsUntilDataIsSent to a higher value (e.g. 132, 256, etc.). This creates a bigger batch each training step (similar to the NumStepsToLookAhead property of rlACAgentOptions when not using parallel computing).
- Use the GPU and not parallel computing: set the NumStepsToLookAhead property to a higher value.
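A minimal sketch of the two configurations, reusing the trainOpts object from the question and assuming the same R2020a-era options API; the value 256 is illustrative, not a recommendation from the answer:
% Option 1: parallel CPU training; raise StepsUntilDataIsSent so each
% worker accumulates a larger batch before sending gradients.
actorOpts  = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice','cpu');
criticOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice','cpu');
trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 256;  % illustrative value

% Option 2: GPU training without parallel workers; raise NumStepsToLookAhead
% so each gradient step uses a longer trajectory.
actorOpts  = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice','gpu');
criticOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice','gpu');
agentOpts  = rlACAgentOptions('NumStepsToLookAhead',256);     % illustrative value
trainOpts.UseParallel = false;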
Comments: 0