How to save a trained Q-Network from a DQN agent?

3 views (last 30 days)
一馬 平田 on 31 Oct 2021
Answered: Abhiram on 12 Jun 2025
I would like to load a trained Q-Network into rlQValueRepresentation. How can I save the pre-trained Q-Network?
I know that a DQN agent can be saved with rlTrainingOptions, but I could not confirm whether the pre-trained Q-Network itself is saved. If it is possible to save the pre-trained Q-Network via rlTrainingOptions, could you please tell me how to load the Q-Network?

Answers (1)

Abhiram on 12 Jun 2025
To save and load a trained Q-network held in an rlQValueRepresentation, extract the critic from the agent with getCritic and save it to a MAT-file. Code snippets for saving and loading the Q-network are given below:
% Extract the critic (Q-network representation) from the trained agent
qRep = getCritic(agent);
% Save the Q-network representation to a MAT-file
save('savedQNetwork.mat','qRep');
% Load the Q-network representation from the file
load('savedQNetwork.mat','qRep');
% Rebuild a DQN agent from the loaded Q-network
% (assumes the agent options object agentOpts is available)
agentFromLoadedQ = rlDQNAgent(qRep, agentOpts);
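If the goal is specifically to rebuild an rlQValueRepresentation from the saved critic, the underlying deep network can be pulled out with getModel and wrapped again. A minimal sketch, assuming obsInfo and actInfo are your environment's specification objects and 'state' is the network's observation input layer name (both are assumptions that must match your setup):
% Load the saved critic representation
load('savedQNetwork.mat','qRep');
% Extract the underlying deep network from the representation
net = getModel(qRep);
% Wrap the network in a fresh Q-value representation
% ('state' must match the observation input layer name of your network)
critic = rlQValueRepresentation(net, obsInfo, actInfo, ...
    'Observation', {'state'});
% Build a DQN agent from the reconstructed critic
agentFromLoadedQ = rlDQNAgent(critic, agentOpts);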
For more information on the "save", "load", "rlDQNAgent" and "getCritic" functions, refer to the MATLAB documentation.
Hope this helps!

Release: R2021b
