Reinforcement Learning Training Algebraic Loop Delay Blocks

Hi all,
I set up RL training with a Simscape model and needed to use delay blocks to avoid an algebraic loop. However, this causes the following problems:
  1. The simulation is terminated when an undesired condition occurs. Because of the delay block, termination happens at the next sample time instead of immediately. The RL agent therefore attributes the termination to the wrong experience tuple and records the penalty against it.
  2. All experience tuples are delayed by one sample time, so the recorded 'action' and 'reward' do not correspond to the correct sample time.
What can be done to solve this problem?
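To make problem 2 concrete, here is a hypothetical Python sketch (not MATLAB/Simulink code, and all names are illustrative) of how a one-sample delay in the observation path shifts the recorded tuples: the action chosen at step k gets logged against the observation from step k-1.

```python
def collect_delayed_experiences(states, actions, rewards):
    """Log experience tuples as a unit-delay (Memory) block would expose them:
    the observation seen at step k is the true state from step k-1."""
    experiences = []
    delayed_obs = states[0]  # the delay block's initial condition
    for state, action, reward in zip(states, actions, rewards):
        experiences.append((delayed_obs, action, reward))
        delayed_obs = state  # the agent only sees this one step later
    return experiences

# Three steps of a toy episode: each tuple pairs the action with a stale
# observation, so the terminating penalty at step 2 is blamed on state 1.0
# even though the true state at that step was 2.0.
states = [0.0, 1.0, 2.0]
actions = ["a0", "a1", "a2"]
rewards = [0.1, 0.2, -1.0]  # say the episode terminates with a penalty
print(collect_delayed_experiences(states, actions, rewards))
# → [(0.0, 'a0', 0.1), (0.0, 'a1', 0.2), (1.0, 'a2', -1.0)]
```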

3 Comments

It's also possible to break the algebraic loop involving Simscape using a first-order transfer function with a small time constant. It can be justified as a kind of sensor delay, assuming the sensor feedback is part of what's causing the algebraic loop (it usually is). This may solve the extra step/sample time problem you have.
I'm getting the same problem when using the Memory block or other blocks to break the algebraic loop. I've tried a first-order transfer function, but it didn't work well. Have you fixed your problem, Tech Logg? Are there any other recommendations from MATLAB staff?
Thanks
Hi Trong,
Sorry for the super late reply; I only found your comment today. I opted for a first-order transfer function and it solved the problem for me. I made sure the delay introduced by the first-order filter was well below the agent's sample time. That way the observations stayed very close to the actual state, which was sufficient for my purposes.
Kind regards,
Tech Logg
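As a rough sanity check of that choice (a sketch, not the thread's actual model; the sample time `Ts` below is an assumed value), the step response y(t) = 1 − exp(−t/τ) of a first-order filter 1/(τs + 1) shows why a time constant well below the agent's sample time works: with τ = Ts/10, the filtered observation recovers about 99.995% of a step change in the state by the next agent sample, whereas a one-step delay block would still report the pre-step value.

```python
import math

def first_order_step_response(tau, t):
    """Step response of H(s) = 1/(tau*s + 1): y(t) = 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

Ts = 0.01        # hypothetical agent sample time (assumption, not from the thread)
tau = Ts / 10.0  # filter time constant well below the sample time

tracked = first_order_step_response(tau, Ts)
print(tracked)   # ~0.99995: the filtered observation has essentially caught up
```

The trade-off is the usual one: a smaller τ tracks the state more closely but does less smoothing, so τ should be small relative to Ts while still acceptable to the solver.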


Answers (0)

Release: R2021a

Asked: 28 April 2021

Last commented: 12 October 2021

