The RL-Agent's cumulative reward keeps overflowing

5 views (last 30 days)
Ronny Landsverk on 17 February 2023
Answered: Ashu on 22 February 2023
While adapting the 'rlwatertank' example, my cumulative reward keeps overflowing.
The original example has a 'StopTrainingValue' of 800, reached before episode 200, but in my adapted example, I cannot get past a value of 128.
I'm pretty sure that the reason is due to an overflow in the 'accumulate_reward' subsystem in the 'RL-Agent' Simulink block which does not occur in the original example.
How do I fix this issue?

Answers (1)

Ashu on 22 February 2023
It is my understanding that you are adapting the 'Water Tank Simulink Model' to train your agent and that your cumulative reward is overflowing.
I assume that you are using the default 'rlTrainingOptions' from the example, which are as follows:
trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/Ts), ...
    ScoreAveragingWindowLength=20, ...
    Verbose=false, ...
    Plots="training-progress",...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=800);
'StopTrainingCriteria' is set to "AverageReward", so training stops when the average reward over the last 'ScoreAveragingWindowLength' episodes (20 here) exceeds the 'StopTrainingValue' (800 here).
In your case, however, the average reward appears to saturate at around 128 and never reaches the 'StopTrainingValue' of 800, which is consistent with the reward overflowing (or saturating) in the 'accumulate_reward' subsystem rather than the stopping criterion ever being met.
To overcome this, you can try the following:
  1. Try increasing the maximum value that the 'accumulate_reward' subsystem in the RL Agent Simulink block can represent, so that larger reward values do not overflow (the first sketch after this list shows one way to locate the overflowing block).
  2. Experiment with a smaller 'MaxStepsPerEpisode'; fewer steps per episode means less reward is accumulated in each episode, which reduces the chance of overflow.
  3. Adjust the hyperparameters of your reinforcement learning algorithm to better fit your problem. For example, reducing the learning rate or adjusting the discount factor may help stabilize the learning process and keep the reward from blowing up (see the second sketch below).
  4. Monitor the reward signal during training to identify any other issues that may be causing the overflow. Setting 'Verbose=true' in the 'rlTrainingOptions' displays the reward and other metrics during training (also shown in the first sketch).
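On points 1, 2, and 4, here is a minimal sketch. It assumes the adapted model is still named 'rlwatertank' and that Tf, Ts, env, and agent already exist in the workspace, as in the original example; the 'IntegerOverflowMsg' diagnostic only helps if the overflow is an integer or fixed-point overflow inside the model.
mdl = 'rlwatertank';
open_system(mdl)
% If the overflow is an integer or fixed-point overflow, raising the
% "Wrap on overflow" diagnostic to an error makes Simulink report the block
% where the accumulated reward overflows.
set_param(mdl,'IntegerOverflowMsg','error');
% Re-train with Verbose=true so the reward is printed every episode; reduce
% MaxStepsPerEpisode to shorten episodes and accumulate less reward per episode.
trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/Ts), ...
    ScoreAveragingWindowLength=20, ...
    Verbose=true, ...
    Plots="training-progress",...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=800);
trainingStats = train(agent,env,trainOpts);
% Inspect the per-episode reward to see where it saturates or overflows.
figure
plot(trainingStats.EpisodeReward)
xlabel("Episode")
ylabel("Episode reward")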
Finally, it's worth noting that the choice of the 'StopTrainingValue' is problem-dependent and may need to be adjusted depending on the specific requirements of your application.
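On point 3, here is a minimal sketch of lowering the learning rates and choosing the discount factor for the DDPG agent used in the 'rlwatertank' example. The numeric values are illustrative placeholders rather than recommendations, Ts is assumed to be defined as in the example, and the resulting option objects are then passed to the critic, the actor, and rlDDPGAgent exactly as in the example.
% Lower learning rates for the critic and actor representations (placeholders).
criticOpts = rlRepresentationOptions(LearnRate=1e-04,GradientThreshold=1);
actorOpts = rlRepresentationOptions(LearnRate=1e-05,GradientThreshold=1);
% A discount factor below 1 keeps the discounted return bounded.
agentOpts = rlDDPGAgentOptions(...
    SampleTime=Ts, ...
    TargetSmoothFactor=1e-3, ...
    DiscountFactor=0.99, ...
    MiniBatchSize=64, ...
    ExperienceBufferLength=1e6);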
You can refer to the MathWorks documentation to learn more about the Water Tank Reinforcement Learning Model.
To learn more about creating a Simulink environment and training an agent, refer to the corresponding documentation page.
