Where are the RL agent's weights actually stored?

Views: 2 (last 30 days)
Dmitriy Ogureckiy, 29 June 2023
After training an RL agent, I got several agent files:
After loading one of them into the workspace, I see this data:
But the saved_agent used in the subsequent simulation does not appear to contain any weights or other information that could be used in Simulink modeling.
--------
My question is: where exactly are the network weights stored, and how does the simulation work?
--------
P.S. For example, I want to deploy this trained RL network on a real robot. How can I do that without the weights?

Answers (1)

Emmanouil Tzorakoleftherakis, 5 July 2023
Hello,
You can implement the trained policy with automatic code generation, e.g., with MATLAB Coder, Simulink Coder, and so on. You don't need to know the weights for that; the code is generated automatically. The following two links provide additional information:
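A minimal sketch of the code-generation route, assuming the agent was saved as saved_agent.mat as in the question (generatePolicyFunction is a Reinforcement Learning Toolbox function; the file and variable names here are assumptions):

```matlab
% Load the trained agent saved during training (file name assumed).
load("saved_agent.mat", "saved_agent");

% Generates evaluatePolicy.m plus a MAT-file holding the policy data.
% The network weights are embedded in these artifacts automatically,
% so you never have to handle them by hand.
generatePolicyFunction(saved_agent);

% evaluatePolicy maps an observation to an action and is a valid
% entry point for MATLAB Coder, e.g.:
%   codegen -config coder.config("lib") evaluatePolicy -args {obs}
```

The generated evaluatePolicy function is what you would compile and deploy to the robot.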
That said, if you still want to take a look at the trained weights, you need to extract the neural network from the agent. You can do this as shown here:
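For instance, a minimal sketch of extracting the weights, assuming a loaded agent named saved_agent and an agent type that has both an actor and a critic (getActor, getCritic, and getLearnableParameters are Reinforcement Learning Toolbox functions):

```matlab
load("saved_agent.mat", "saved_agent");

% Pull the function approximators out of the agent object.
actor  = getActor(saved_agent);
critic = getCritic(saved_agent);

% Learnable parameters come back as a cell array of weight/bias arrays.
actorParams  = getLearnableParameters(actor);
criticParams = getLearnableParameters(critic);

% Inspect the first weight array, e.g. to port it to another platform.
disp(actorParams{1})
```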
Hope this helps
Comments: 2
Dmitriy Ogureckiy, 17 July 2023
Thank you, but my question was: how do the functions getCritic and getActor get the weights from the agent if it doesn't contain them?
Emmanouil Tzorakoleftherakis, 17 July 2023 (edited 17 July 2023)
The agent "includes" the neural networks, which "include" the weights. Just because you can see the weights from the neural network object does not mean you can view them directly from the agent object; that depends on how the classes are structured. The examples I shared show how you can get the weights from the agent.
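That containment chain can be sketched as follows, assuming a loaded agent named saved_agent (getModel returns the underlying dlnetwork for default agents in recent releases; names are illustrative):

```matlab
% agent -> actor approximator -> dlnetwork -> weights
actor = getActor(saved_agent);   % approximator object held inside the agent
net   = getModel(actor);         % the underlying dlnetwork
disp(net.Learnables)             % table with Layer | Parameter | Value columns
```

So getActor does not conjure the weights from nowhere; it returns the approximator object the agent already stores internally, and the weights live in that object's network.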



Release: R2022a

