PPO algorithm training problem in Reinforcement Learning Toolbox
In the PPO training algorithm documentation, it is stated: "For each experience sequence that does not contain a terminal state, N is equal to the ExperienceHorizon option value. Otherwise, N is less than ExperienceHorizon and S_N is the terminal state."
Here is my first question: when N is smaller than ExperienceHorizon and also smaller than the mini-batch size, and this continues for multiple consecutive episodes, when does the algorithm update its parameters?
And my second question: when will the PPO parameters be updated under the following settings?
agentOpts = rlPPOAgentOptions( ...
    'ExperienceHorizon',10000, ...
    'MiniBatchSize',64, ...
    'NumEpoch',3);
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',10000, ...
    'MaxStepsPerEpisode',30);
Accepted Answer
Takeshi Takahashi
5 July 2023
When N is smaller than ExperienceHorizon and N is also smaller than MiniBatchSize, the PPO agent uses N experiences to update its parameters at the end of the episode.
So, if MaxStepsPerEpisode = 30, ExperienceHorizon = 10000, and MiniBatchSize = 64, the PPO agent uses 30 or fewer experiences (fewer when the episode terminates early) to update its parameters at the end of each episode.
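As an end-to-end illustration of this configuration, here is a minimal sketch (the CartPole-Discrete predefined environment and the default agent networks are assumptions added for illustration; they are not part of the original question):

% With MaxStepsPerEpisode (30) < MiniBatchSize (64) < ExperienceHorizon (10000),
% every episode ends before the experience horizon is reached, so the agent
% updates its parameters at the end of each episode using at most 30 experiences.
env = rlPredefinedEnv("CartPole-Discrete");     % assumed example environment
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

agentOpts = rlPPOAgentOptions( ...
    'ExperienceHorizon',10000, ...
    'MiniBatchSize',64, ...
    'NumEpoch',3);
agent = rlPPOAgent(obsInfo, actInfo);           % default actor/critic networks
agent.AgentOptions = agentOpts;

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',10000, ...
    'MaxStepsPerEpisode',30);
trainingStats = train(agent, env, trainOpts);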
2 Comments
轩
31 December 2023
So what determines the value of N when the episode is not stopped by reaching ExperienceHorizon or a terminal state?
Thank you in advance for your explanation.
轩
31 December 2023
I may have found the answer in the documentation page Create Policies and Value Functions - MATLAB & Simulink - MathWorks Benelux:
"When using PG agents, the learning trajectory length (that is the sequence of input data that the network uses for learning) for the RNN is the whole episode. For an AC agent, the NumStepsToLookAhead property of its options object is treated as the training trajectory length (except when training in parallel, in which case NumStepsToLookAhead is ignored and the whole episode is used as trajectory length). For a PPO agent, the trajectory length is the MiniBatchSize property of its options object."