RL Stop training criteria

2 views (last 30 days)
Ivo Manri on 25 Jan 2023
I have a Simulink RL environment that I would like to train in real time (with a signal from a DAQ). I placed the agent in a triggered subsystem that is triggered by non-periodic events from the DAQ (for example, the agent is triggered at t = 0.95, t = 2.01, t = 2.98, etc.). I would like the agent to train for 40 minutes at a time, but to keep training it over multiple days.
I have noticed that the agent continues training a given episode after it reaches the stopping criteria. For example, say I set my agent to train for 3 episodes with a maximum of 10 steps per episode. If I set my StopTrainingCriteria to 5 steps, the agent will continue to train until the episode is over.
I find that the same behavior occurs with the save-agent criteria. If I set the save-agent criteria to 5 steps, when I look at the folder where the agents are saved, I see only 3 saved agents, one for each episode, instead of 10 saved agents.
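For reference, here is a minimal sketch of the training options described above. It assumes the "5 steps" criterion maps to the "AverageSteps" option (which is averaged over ScoreAveragingWindowLength), and that the agent and env objects have already been created elsewhere; the folder name is a placeholder.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',3, ...                       % train for 3 episodes
    'MaxStepsPerEpisode',10, ...               % at most 10 agent steps per episode
    'StopTrainingCriteria','AverageSteps', ... % assumed mapping of "5 steps"
    'StopTrainingValue',5, ...
    'SaveAgentCriteria','AverageSteps', ...
    'SaveAgentValue',5, ...
    'SaveAgentDirectory','savedAgents');       % placeholder folder name
trainingStats = train(agent, env, trainOpts);  % agent and env assumed defined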

1 Answer

Emmanouil Tzorakoleftherakis on 26 Jan 2023
I believe that for event-based training you need to adjust your stopping/saving criteria accordingly. The agent only takes a step when an event is triggered. So if you set your stopping criteria to 5 steps and the training episode does not terminate early, that probably means fewer than 5 events occur within that time frame. The same applies to the saving criteria.
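As a rough illustration of that point (the event rate below is an assumption, not a number from the question): with event-triggered execution, the number of agent steps available in an episode is bounded by the number of trigger events, so step-based criteria should sit below that bound.
approxEventRate = 1;              % assumed: about one DAQ event per second
episodeDuration = 40*60;          % a 40-minute episode, in seconds
expectedSteps   = approxEventRate * episodeDuration;  % ~2400 agent steps available
% A step-based stop/save value larger than expectedSteps can never be reached,
% so the episode will simply run to its end without triggering the criterion.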
