Comprehension problem: Unable to reset RL Agent (DDPG)

1 view (last 30 days)
Tobias Michl on 13 June 2022
Answered: Poorna on 1 September 2023
I have trained an RL agent and can confirm that its behavior is close to a possible solution. Now I want to reset the agent's knowledge (empty the experience buffer, reset the weights of the actor and critic networks), so I used
agent = reset(agent)
I can confirm that the experience buffer is empty, but the behavior is still the same. What have I missed here, and how can I reset the policies?
And no, I did not (deliberately) save the agent's progress.
Many thanks in advance!

Answers (1)

Poorna on 1 September 2023
Hi,
I understand that you would like to reset the agent by using the “reset” function. However, note that calling reset on the agent empties the experience buffer but does not reset the network weights.
To reset the weights, you must explicitly reinitialize the actor and critic networks of the agent.
You can do this as shown below:
agent = reset(agent);  % empties the experience buffer
critic = rlQValueFunction(criticNet,obsInfo,actInfo);  % criticNet: the critic network used before training
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);  % actorNet: the actor network used before training
agent = setCritic(agent,critic);  % use setCritic/setActor rather than assigning agent properties directly
agent = setActor(agent,actor);
You can modify the initialization of critic and actor as required.
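If the original layer definitions are still in your workspace, another option is to rebuild the networks and create a fresh agent, which re-randomizes the weights and also resets the experience buffer and the internal target networks. This is only a minimal sketch: criticGraph and actorGraph are placeholder names for the layerGraph objects you originally used to build the networks.
% Rebuilding the dlnetwork objects re-initializes their learnable parameters with fresh random values
criticNet = dlnetwork(criticGraph);
actorNet = dlnetwork(actorGraph);
critic = rlQValueFunction(criticNet,obsInfo,actInfo);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
% A newly constructed agent starts with an empty experience buffer;
% reusing agent.AgentOptions keeps the same hyperparameters as before
newAgent = rlDDPGAgent(actor,critic,agent.AgentOptions);
This way the new agent reuses only the agent options and the layer definitions, without depending on any internal state of the trained agent.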
Hope this helps!
