How can I display the trained network weights in a reinforcement learning agent?

Views: 5 (last 30 days)
Ru SeokHun on 21 Feb 2020
Commented: Lac Duong on 6 Oct 2020
Hello,
I trained a DDPG agent using the Reinforcement Learning Toolbox.
I wanted to inspect the trained weights in the agent, so after training finished I checked the agent variables in the workspace.
However, I couldn't find any weight values in the variables, not even in the 'agent' and 'env' variables.
I know it is possible to check a network's weights in the Neural Network Toolbox, but is it possible to access the weights in the Reinforcement Learning Toolbox?
What should I do?

Answers (1)

Anh Tran on 21 Feb 2020
Edited: Anh Tran on 21 Feb 2020
Hi Ru SeokHun,
In MATLAB R2019b and earlier, this is a two-step process:
  1. Use the getActor and getCritic functions to extract the actor and critic representations from the trained agent.
  2. Use the getLearnableParameterValues function to get the weights and biases of the neural network representation.
See the code below to get the parameters of the trained actor. You can compare these values with those of an untrained agent. Assume you have a DDPG agent named 'agent'.
% get the agent's actor, which predicts next action given the current observation
actor = getActor(agent);
% get the actor's parameters (neural network weights)
actorParams = getLearnableParameterValues(actor);
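The same two steps apply to the critic. The lines below are a minimal sketch assuming the same trained 'agent' variable from the question; note that in releases after R2019b the parameter function was reportedly renamed getLearnableParameters, so adjust the call if needed.
% get the agent's critic, which estimates the long-term reward for a given observation and action
critic = getCritic(agent);
% get the critic's parameters (neural network weights and biases)
criticParams = getLearnableParameterValues(critic);
The returned parameters can then be inspected directly in the workspace or compared against those of an untrained agent.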

