Non-linearity errors and zero-crossing errors while training an RL agent

Views: 13 (last 30 days)
A few days ago I posted a question about training an agent to control a nonlinear three-degree-of-freedom aircraft model. During training, I randomly get this warning:
The output port signal type of 'RL_Training_TECS_Model/Environment System/State Propagator/Forces, Moments, 3DoF/Aerodynamic Coefficients, 3DoF/Datcom Aerodynamic Model' is real (non-complex), however, the evaluated output is complex. Consider setting the 'OutputSignalType' to complex
followed by this error:
Error using rl.internal.train.OffPolicyTrainer/run_internal_/nestedRunEpisode (line 284)
An error occurred while running the simulation for model 'RL_Training_TECS_Model' with the following RL agent blocks:
RL_Training_TECS_Model/RL TECS Alt Hold
Error in rl.internal.train.OffPolicyTrainer/run_internal_ (line 351)
out = nestedRunEpisode(policy);
Error in rl.internal.train.OffPolicyTrainer/run_ (line 40)
result = run_internal_(this);
Error in rl.internal.train.Trainer/run (line 8)
result = run_(this);
Error in rl.internal.trainmgr.OnlineTrainingManager/run_ (line 112)
trainResult = run(trainer);
Error in rl.internal.trainmgr.TrainingManager/run (line 4)
result = run_(this);
Error in rl.agent.AbstractAgent/train (line 86)
trainingResult = run(tm);
Caused by:
Error using rl.env.internal.reportSimulinkSimError (line 29)
Simulink will stop the simulation of model 'RL_Training_TECS_Model' because the 1 zero crossing signal(s) identified below caused 1000 consecutive zero crossing events in time interval between 7.6446467665241798e-11 and 7.6446467670045701e-11.
--------------------------------------------------------------------------------
Number of consecutive zero-crossings: 1000
Zero-crossing signal name : SwitchCond
Block type: Switch
Block path : 'RL_Training_TECS_Model/Environment System/State Propagator/Forces, Moments, 3DoF/Aerodynamic Coefficients, 3DoF/Switch'
--------------------------------------------------------------------------------
or this one:
Error using rl.internal.train.OffPolicyTrainer/run_internal_/nestedRunEpisode (line 284)
An error occurred while running the simulation for model 'RL_Training_TECS_Model' with the following RL agent blocks:
RL_Training_TECS_Model/RL TECS Alt Hold
Error in rl.internal.train.OffPolicyTrainer/run_internal_ (line 351)
out = nestedRunEpisode(policy);
Error in rl.internal.train.OffPolicyTrainer/run_ (line 40)
result = run_internal_(this);
Error in rl.internal.train.Trainer/run (line 8)
result = run_(this);
Error in rl.internal.trainmgr.OnlineTrainingManager/run_ (line 112)
trainResult = run(trainer);
Error in rl.internal.trainmgr.TrainingManager/run (line 4)
result = run_(this);
Error in rl.agent.AbstractAgent/train (line 86)
trainingResult = run(tm);
Caused by:
Error using rl.env.internal.reportSimulinkSimError (line 29)
Solver encountered an error while simulating model 'RL_Training_TECS_Model' at time 1.140683144978552e-08 and cannot continue. Please check the model for errors.
Error using rl.env.internal.reportSimulinkSimError (line 29)
Nonlinear iteration is not converging with step size reduced to hmin (4.05252E-23) at time 1.14068E-08. Try reducing the minimum step size and/or relax the relative error tolerance.
At the start of each episode, the reset function trims the aircraft to a safe operating point that I specify and then linearizes the model. What could these errors be due to?
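For reference, my reset function follows this pattern (a minimal sketch; trimAircraft and the envelope bounds are placeholders standing in for my actual trim and linearization code):

env.ResetFcn = @localResetFcn;

function in = localResetFcn(in)
    % Pick a random operating point inside the safe envelope
    % (placeholder bounds)
    alt0 = 1000 + 500*rand;   % altitude, m
    V0   = 60 + 20*rand;      % airspeed, m/s
    % Trim the 3-DoF aircraft there; trimAircraft stands in for the
    % actual trim/linearization routine
    [x0, u0] = trimAircraft(alt0, V0);
    % Hand the trimmed initial condition to the model for this episode
    in = setVariable(in, 'x0', x0);
    in = setVariable(in, 'u0', u0);
end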

Accepted Answer

Maneet Kaur Bagga on 12 July 2024
Hi,
I understand that you are encountering errors during the training of your reinforcement learning agent for the nonlinear aircraft model. Please refer to the following as possible workarounds to debug and resolve the errors:
1. Zero-crossing error at the Switch block:
  • Go to Model Settings > Solver > Zero-crossing control and adjust the zero-crossing detection algorithm settings in Simulink (a set_param sketch of the equivalent changes follows this list).
  • Re-evaluate the Switch block so that its switch condition (SwitchCond in your error) does not oscillate rapidly; that chattering is what produces the consecutive zero-crossing events.
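A minimal sketch of those changes, assuming the model and block path from your error message (the values are illustrative):

mdl = 'RL_Training_TECS_Model';
% Use the adaptive zero-crossing algorithm, which backs off detection
% when a signal chatters around zero
set_param(mdl, 'ZeroCrossAlgorithm', 'Adaptive');
% Raise the limit on consecutive zero crossings (default 1000, the
% threshold reported in your error)
set_param(mdl, 'MaxConsecutiveZCs', '2000');
% Or disable zero-crossing detection on the offending Switch block only
swPath = [mdl '/Environment System/State Propagator/Forces, Moments, 3DoF/Aerodynamic Coefficients, 3DoF/Switch'];
set_param(swPath, 'ZeroCross', 'off');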
2. Nonlinear solver convergence:
  • Adjust the solver settings to help convergence: relax (increase) the relative and absolute error tolerances and, if needed, reduce the minimum step size, as the error message suggests (see the sketch after this list).
  • Check that the model is stable around the trim point so that it does not introduce high-frequency dynamics or numerical instability.
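A minimal sketch of those solver adjustments, again assuming your model name (pick tolerances appropriate for your dynamics):

mdl = 'RL_Training_TECS_Model';
% A stiff solver often copes better with fast aerodynamic modes
set_param(mdl, 'Solver', 'ode15s');
% Relax the error tolerances so the nonlinear iteration can converge
set_param(mdl, 'RelTol', '1e-3');
set_param(mdl, 'AbsTol', '1e-5');
% Let Simulink choose the minimum step size
set_param(mdl, 'MinStep', 'auto');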
Please refer to the following File Exchange link for further understanding:
Hope this helps!

More Answers (1)

Mxolisi on 26 July 2024
I am actually getting the same error, but it is a bit tricky for me to follow your instructions:
An error occurred while running the simulation for model 'RLmxolisifinal' with the following RL agent blocks:
out = nestedRunEpisode(policy);
result = run_internal_(this);
result = run_(this);
trainResult = run(trainer);
result = run_(this);
trainingResult = run(tm);
Caused by:
Unable to find system or file 'rRLmxolisifinal'.
