Transient values of variables used in the reward function of reinforcement learning

Hello, I encountered a problem when designing the reward function. In my Simulink environment, I want to incorporate some variables into the reward function. During training of the RL agent, these variables only converge after about 0.06 s, while the agent is trained from 0 s. Putting the RL Agent block inside an enabled subsystem doesn't help.
From my understanding, the transient values will distort the reward signal, which may result in a poorly trained agent. Does anyone have any suggestions regarding this question?
Thank you very much.

Accepted Answer

Emmanouil Tzorakoleftherakis on 22 Mar 2021
You can put the RL Agent block under a triggered subsystem and set it up so that the agent only begins executing, and therefore training, after 0.06 seconds.
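Below is a minimal sketch of one way to build that gating signal programmatically; the model name rlDelayedAgent, the agent sample time Ts, and the block names are assumptions for illustration, not something stated in this thread. A pulse train whose first rising edge occurs at 0.06 s drives the trigger port of the subsystem that will hold the RL Agent block, so the agent only starts stepping once the environment variables have settled.

mdl = 'rlDelayedAgent';   % hypothetical model name
Ts  = 0.001;              % assumed agent sample time in seconds
new_system(mdl);
open_system(mdl);

% Pulse train: one rising edge per agent step, first edge delayed to 0.06 s
add_block('simulink/Sources/Pulse Generator', [mdl '/AgentTrigger'], ...
    'PulseType', 'Time based', 'Amplitude', '1', ...
    'Period', num2str(Ts), 'PulseWidth', '50', 'PhaseDelay', '0.06');

% Triggered subsystem that will contain the RL Agent block
add_block('simulink/Ports & Subsystems/Triggered Subsystem', ...
    [mdl '/AgentSubsystem']);
set_param([mdl '/AgentSubsystem/Trigger'], 'TriggerType', 'rising');

% Route the pulse train to the trigger port of the subsystem
add_line(mdl, 'AgentTrigger/1', 'AgentSubsystem/Trigger', 'autorouting', 'on');

% The RL Agent block itself is then placed inside AgentSubsystem, with its
% sample time left as -1 (inherited) so it executes only at trigger events.

The same wiring can of course be done by hand in the Simulink editor; the key point is simply that the trigger signal has no rising edges before 0.06 s.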
5 Comments
Emmanouil Tzorakoleftherakis on 23 Mar 2021
I believe it should be 40, yes (presumably the 0.06 s delay divided by the agent's sample time). There is a counter implemented internally that keeps track of how many times the RL Agent block has run.


More Answers (0)
