PPO convergence guarantee in RL toolbox

Views: 7 (last 30 days)
Haochen on 8 June 2024
Answered: Karan Singh on 17 June 2024
Hi,
I am testing my environment with the PPO algorithm in the RL Toolbox. I recently read this paper: https://arxiv.org/abs/2012.01399, which lists a set of assumptions under which PPO is guaranteed to converge. Some of them concern the environment itself (like the transition kernel), and some concern the functions and parameters of the algorithm (like the learning rate alpha and the update function h).
I am not sure whether the PPO algorithm in the RL Toolbox satisfies the algorithmic assumptions for convergence, because I could not find any direct mention of convergence on the official MathWorks website, so I wonder how the algorithm is designed with convergence in mind.
Do I need to look into the train() function to see how those parameters and functions are designed?
Thank you

Accepted Answer

Karan Singh on 17 June 2024
Hi Haochen,
The Proximal Policy Optimization (PPO) algorithm in MATLAB's Reinforcement Learning Toolbox is based on the original PPO paper by Schulman et al. (2017), as referenced in the documentation (https://www.mathworks.com/help/reinforcement-learning/ug/proximal-policy-optimization-agents.html).
It follows the core design of that paper, but the documentation does not state a formal convergence guarantee. As with most RL algorithms, whether PPO converges in practice hinges on several factors, including the hyperparameter settings, the complexity of the environment, and implementation details.
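If it helps to connect the paper's algorithmic assumptions to concrete settings, the relevant quantities are exposed through the agent options rather than inside train(). Here is a minimal sketch, assuming a recent toolbox release (property names such as ActorOptimizerOptions may differ in older versions):

% PPO hyperparameters that map onto the algorithmic assumptions
% (step size "alpha", clipped surrogate objective, discounting).
agentOpts = rlPPOAgentOptions( ...
    'ClipFactor',        0.2, ...   % epsilon in the clipped surrogate objective
    'EntropyLossWeight', 0.01, ...  % entropy regularization weight
    'DiscountFactor',    0.99, ...  % gamma
    'MiniBatchSize',     64, ...
    'NumEpoch',          3);
% The step sizes (the "alpha" in the convergence assumptions) are set
% through the actor and critic optimizer options.
agentOpts.ActorOptimizerOptions.LearnRate  = 1e-4;
agentOpts.CriticOptimizerOptions.LearnRate = 1e-3;
disp(agentOpts)  % review all settings before calling train()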
Regarding the source code, accessing the detailed internals of the implementation might not be possible.
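As for looking into train(), you can at least locate the relevant files on your path to see which parts are readable MATLAB code and which ship as built-ins, for example:

% Locate the PPO agent constructor and the (overloaded) train function.
which -all rlPPOAgent
which -all train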

More Answers (0)
