QTable reset when using train

1 view (last 30 days)
Corrado Possieri on 18 May 2020
Commented: Corrado Possieri on 20 May 2020
Hi,
I am using the MATLAB Reinforcement Learning Toolbox to train an rlQAgent.
The issue I am facing is that the corresponding QTable, i.e., the output of getLearnableParameters(getCritic(qAgent)), appears to be reset each time the train command is called; a sketch of what I am doing is below.
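A minimal sketch of what I mean (env and trainOpts here are placeholder names for my environment and training options):
trainingStats = train(qAgent, env, trainOpts);        % first training run
qTable1 = getLearnableParameters(getCritic(qAgent));  % learned Q-table
trainingStats = train(qAgent, env, trainOpts);        % second run; the table looks reset
qTable2 = getLearnableParameters(getCritic(qAgent));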
Is it possible to avoid this reset so that a previously trained agent can be trained further?
Thank you,
Corrado

Accepted Answer

Emmanouil Tzorakoleftherakis on 19 May 2020
Edited: Emmanouil Tzorakoleftherakis on 20 May 2020
If you stop training, you should be able to continue from where you left off. I called 'train' on the basic grid world example a couple of times in a row, and the output of 'getLearnableParameters(getCritic(qAgent))' changed between runs, so training does continue from the previous table. You can always save the trained agent and reload it to make sure you don't accidentally delete it.
Update:
There is a regularization term added to the loss, which causes even the entries that are not visited during training to change slightly. To avoid this, you can type:
qRepresentation.Options.L2RegularizationFactor = 0;
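For reference, here is a minimal sketch of the whole workflow on that example. The setup lines follow the basic grid world documentation example and assume the R2020a-era toolbox syntax; the variable names are just illustrative:
% Sketch assuming the R2020a-era Reinforcement Learning Toolbox and the
% predefined basic grid world environment.
env = rlPredefinedEnv("BasicGridWorld");
qTable = rlTable(getObservationInfo(env), getActionInfo(env));
qRepresentation = rlQValueRepresentation(qTable, getObservationInfo(env), getActionInfo(env));
qRepresentation.Options.L2RegularizationFactor = 0;   % no weight decay, so untouched entries stay fixed
agentOpts = rlQAgentOptions;
qAgent = rlQAgent(qRepresentation, agentOpts);
trainOpts = rlTrainingOptions('MaxEpisodes', 200, 'MaxStepsPerEpisode', 50);
train(qAgent, env, trainOpts);                        % first run
qTable1 = getLearnableParameters(getCritic(qAgent));
train(qAgent, env, trainOpts);                        % second run continues from qTable1
qTable2 = getLearnableParameters(getCritic(qAgent));
With the regularization factor at zero, entries the agent does not visit in the second run keep their previous values, so the table is no longer perturbed between runs.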
5 Comments
Emmanouil Tzorakoleftherakis on 20 May 2020
Updated my answer above with a solution - hope that helps.
Corrado Possieri on 20 May 2020
Thank you, Emmanouil. This solved the issue.


More Answers (0)
