
Load a Previously Trained Agent into a Simulink Reinforcement Learning Model to start training again in MATLAB 2021a

Views: 43 (last 30 days)
Hi,
I'm trying to continue work training a previously trained RL DDPG Agent using a Simulink model in the 2021a release of MATLAB. When I've tried loading the agent with the following code below, it appears to create a entirely new agent and ignores the behavior I saw in the previous training. Some background info on the agent:
  • The agent was trained for 200 episodes and saved to a '.mat' file.
  • The training options I'm using now are the same as those used for the pretrained agent I'm trying to load.
  • The RL networks for the actor and the critic are the same.
  • I'm saving the experience buffer with the agent in this '.mat' file.
  • It was trained in R2021a.
Is there a way to effectively load the pretrained agent's networks into the RL DDPG agent I use for my Simulink training in MATLAB R2021a, or is there a later release of MATLAB that does what I need?
Thank you for your help
Code:
Pretrained_agent_flag = true;
if Pretrained_agent_flag
    pretrainedagent = load('MyAgent.mat'); % Load previous agent .mat file (returns a struct)
    agent = pretrainedagent.agent;         % Extract the saved agent object (saved variable assumed to be named 'agent')
else
    agent = rlDDPGAgent(actor,critic,agentOptions); % DDPG Agent
end
trainingResults = train(agent,env,trainingOptions);

Answers (1)

Yash Sharma, 20 October 2023
Hi Julian,
I understand that you have a pretrained RL DDPG agent that you want to load in MATLAB. When you load a pretrained RL DDPG agent using the "load" function, it only loads the agent object itself, not the underlying network weights.
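As a side note, "load" returns a struct whose fields are the variables that were saved, so the agent object has to be indexed out of that struct before it can be passed to "train". A minimal sketch (the field name 'agent' is an assumption; check the actual names with "fieldnames"):

```matlab
% load returns a struct of saved variables, not the agent directly
loaded = load('MyAgent.mat');
disp(fieldnames(loaded));   % inspect what was actually saved
agent = loaded.agent;       % extract the rlDDPGAgent object itself
```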
To effectively load the pretrained agent network into the RL DDPG network in MATLAB Simulink training, you can follow these steps:
  • Save the network weights separately: before saving the agent to a MAT file, extract the weights from the actor and critic networks using the "getLearnableParameters" function and save them to separate variables.
  • Load the network weights and agent configuration: when loading the pretrained agent, use the "load" function to read the network weights and agent configuration from the MAT file, then assign the loaded weights to the actor and critic networks of a new DDPG agent.
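The save-side half of these steps can be sketched as follows (a sketch assuming 'agent' is a trained rlDDPGAgent in the workspace; the variable names are illustrative):

```matlab
% Extract the actor and critic from the trained agent, then pull out
% their learnable parameters so they can be saved alongside the agent
actorParams  = getLearnableParameters(getActor(agent));
criticParams = getLearnableParameters(getCritic(agent));
save('MyAgent.mat','agent','actorParams','criticParams');
```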
Here is example code showing how to load an RL DDPG agent in MATLAB:
Pretrained_agent_flag = true;
if Pretrained_agent_flag
    % Load the pretrained agent
    pretrainedAgentData = load('MyAgent.mat');
    % Extract the actor and critic, then their learnable parameters
    pretrainedActor  = getActor(pretrainedAgentData.agent);
    pretrainedCritic = getCritic(pretrainedAgentData.agent);
    actorWeights  = getLearnableParameters(pretrainedActor);
    criticWeights = getLearnableParameters(pretrainedCritic);
    % Copy the loaded weights into the new actor and critic
    actor  = setLearnableParameters(actor, actorWeights);
    critic = setLearnableParameters(critic, criticWeights);
    % Create a DDPG agent using the pretrained network weights
    agent = rlDDPGAgent(actor, critic, agentOptions);
else
    % Create a new DDPG agent
    agent = rlDDPGAgent(actor, critic, agentOptions);
end
trainingResults = train(agent, env, trainingOptions);
For further reference, see the MathWorks documentation for the "getLearnableParameters", "setLearnableParameters", and "rlDDPGAgent" functions.
Hope this helps!

Release

R2021a
