How can I extract a trained RL Agent's network's weights and biases?
My network is:
% numObservations, NumNeuron, learning_rate, obsInfo, actInfo, agentOpts and
% trainOpts are defined earlier in the script
statePath = [
    imageInputLayer([numObservations 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(NumNeuron,'Name','CriticStateFC1')
    reluLayer('Name','CriticRelu1')
    fullyConnectedLayer(NumNeuron,'Name','CriticStateFC2')];
actionPath = [
    imageInputLayer([1 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(NumNeuron,'Name','CriticActionFC1')
    reluLayer('Name','ActorRelu1')
    fullyConnectedLayer(NumNeuron,'Name','CriticActionFC2')];
commonPath = [
    additionLayer(2,'Name','add')
    reluLayer('Name','CriticCommonRelu')
    fullyConnectedLayer(1,'Name','output')];
% assemble the critic network from the state, action and common paths
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = addLayers(criticNetwork,commonPath);
criticNetwork = connectLayers(criticNetwork,'CriticStateFC2','add/in1');
criticNetwork = connectLayers(criticNetwork,'CriticActionFC2','add/in2');
% set some options for the critic
criticOpts = rlRepresentationOptions('LearnRate',learning_rate, ...
    'GradientThreshold',1);
% create the critic based on the network approximator
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'},criticOpts);
% create and train the DQN agent
agent = rlDQNAgent(critic,agentOpts);
trainingStats = train(agent,env,trainOpts);
After training, I'd like to get the network's trained weights and biases.
Accepted Answer
Anh Tran on 27 Mar 2020
Edited: Anh Tran on 27 Mar 2020
You can get the parameters from the trained critic representation of the DQN agent. In MATLAB R2020a, see the getCritic and getLearnableParameters functions (the function names have changed slightly since R2019b). You can follow similar steps to extract the actor's parameters from an actor-based agent such as DDPG or PPO, using getActor instead.
critic = getCritic(agent);
criticParams = getLearnableParameters(critic);
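For reference, getLearnableParameters on an rlQValueRepresentation returns a cell array of parameter values. A minimal sketch, continuing from the two lines above and assuming the entries come back layer by layer with each layer's weights followed by its biases:
for k = 1:numel(criticParams)
    % print the size of every learnable parameter array
    fprintf('Parameter %d: %s\n', k, mat2str(size(criticParams{k})));
end
% assuming the weight/bias ordering described above, the first state-path
% layer ('CriticStateFC1') would correspond to the first two entries
W1 = criticParams{1};   % weight matrix (assumed to belong to CriticStateFC1)
b1 = criticParams{2};   % bias vector (assumed to belong to CriticStateFC1)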
6 Comments
Francisco Serra on 14 Dec 2023
轩 on 5 Jan 2024
@Francisco Serra I have the same need. A crude workaround I found: save the agent after every episode and call getLearnableParameters on each saved agent to print its parameters.
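A minimal sketch of that per-episode approach, assuming the agent and env from the question; the 'EpisodeCount' save criterion, the MaxEpisodes value, and the saved_agent variable name inside the saved MAT-files are assumptions about how the agent-saving options behave:
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',500, ...                   % arbitrary episode budget for illustration
    'SaveAgentCriteria','EpisodeCount', ...  % assumed criterion: save once the episode
    'SaveAgentValue',1, ...                  % count reaches 1, i.e. after every episode
    'SaveAgentDirectory','savedAgents');
trainingStats = train(agent,env,trainOpts);
% afterwards, read the critic parameters back out of each saved agent
agentFiles = dir(fullfile('savedAgents','Agent*.mat'));
for k = 1:numel(agentFiles)
    s = load(fullfile('savedAgents',agentFiles(k).name));   % assumed to contain saved_agent
    params = getLearnableParameters(getCritic(s.saved_agent));
    disp(params)
end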