Simulink interruption during RL training
Hey everyone,
Anyone who has used reinforcement learning (RL) to train agents on physical models in Simulink knows that, during the initial training phase, random exploration often triggers assertions or other numerical instabilities that cause Simulink to crash or the simulation to diverge. This makes it very difficult to use the official train function provided by MathWorks: once Simulink crashes, all of the accumulated RL experience (the replay buffer) is lost, essentially forcing you to restart training from scratch each time.
So far, the only workaround I've found is to wrap the training loop in an external try-catch block. When a failure occurs, I save the current agent parameters and reload them at the start of the next training run. But, as many of you know, this slows training down by 100x or more.
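For reference, here is a minimal sketch of the try/catch restart wrapper I described. It assumes that `agent` (an RL agent object) and `env` (an rlSimulinkEnv environment for the model) have already been created in the workspace; the checkpoint file name is a placeholder.

```matlab
% Sketch of the crash-tolerant training wrapper (assumes 'agent' and
% 'env' already exist; 'agentCheckpoint.mat' is a placeholder name).
checkpoint = 'agentCheckpoint.mat';
if isfile(checkpoint)
    s = load(checkpoint, 'agent');   % resume from the last saved agent
    agent = s.agent;
end

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 500, ...
    'SaveAgentCriteria', 'EpisodeCount', ...  % also checkpoint periodically
    'SaveAgentValue', 10);

done = false;
while ~done
    try
        trainingStats = train(agent, env, trainOpts);
        done = true;                 % training completed normally
    catch err
        warning('Training crashed (%s); saving agent and restarting.', ...
            err.message);
        save(checkpoint, 'agent');   % parameters survive; the replay buffer does not
        bdclose('all');              % discard the broken model instance
    end
end
```

If your agent is off-policy (e.g. DDPG or SAC), it may also be worth checking whether your release supports the SaveExperienceBufferWithAgent agent option, which saves the replay buffer together with the agent so a restart loses less.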
Alternatively, one could pre-train on a simpler case and then fine-tune on the full model, but that’s not always feasible.
Has anyone discovered a better way to handle this?
Accepted Answer
Additional Answers (0)