How do I properly substitute rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation?

14 views (last 30 days)
I am using MATLAB R2020a, where rlRepresentation is "not recommended." As a result, I need to substitute it with the critics or actors listed in the compatibility guide (https://www.mathworks.com/help/reinforcement-learning/ref/rlrepresentation.html#mw_a6277225-fecf-4d97-9549-1fc4799bf5b6). I tried replacing rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation (leaving rlRepresentationOptions as is wherever it came up). They all resulted in errors; rlValueRepresentation and rlStochasticActorRepresentation produced the fewest (and the same) errors:
Error using rlStochasticActorRepresentation (line 93)
Too many input arguments.
Error in createDDPGNetworks (line 51)
critic = rlStochasticActorRepresentation (criticNetwork,criticOptions, ...
Since both the critic and the actor produce the same error, I suspect it has something to do with rlRepresentationOptions, since that object supplies properties to the actors and critics (as far as I understand).
For reference, I am trying to emulate this project (https://www.youtube.com/watch?v=6DL5M9b2j6I) in MATLAB R2020a.
Any help is appreciated.

Answers (4)

Emmanouil Tzorakoleftherakis on 17 Jul 2020
It would be helpful if you pasted the exact MATLAB code you are typing so we can see what the problem is. I suspect you simply changed the method name, which is why you get the error you are seeing. Have a look at the documentation page for the respective method you want to use (rlValueRepresentation, etc.) and make sure the order and number of arguments match the doc.
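For example, under the R2020a signatures the options object comes last, after the network and the observation/action specifications, whereas the old rlRepresentation took the options as its second argument. A minimal sketch, assuming an environment env, the networks and options objects named as in the question, and an input layer named 'observation' (adjust these names to match your own networks):

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% State-value critic, V(s): no action specification is needed
critic = rlValueRepresentation(criticNetwork,obsInfo, ...
    'Observation',{'observation'},criticOptions);

% Stochastic actor: note there is no 'Action' name-value pair;
% the action output is inferred from the network and actInfo
actor = rlStochasticActorRepresentation(actorNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},actorOptions);

Keeping criticOptions as the second argument, as the old rlRepresentation call did, is likely what produces the "Too many input arguments" error above.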

Salma Khaled on 5 Aug 2021
  1. Create an actor using an rlDeterministicActorRepresentation object.
  2. Create a critic using an rlQValueRepresentation object.
https://www.mathworks.com/help/reinforcement-learning/ug/ddpg-agents.html
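A minimal sketch of that pairing, assuming env, criticNetwork, actorNetwork, and the corresponding options objects are already defined; the layer names ('observation', 'action', 'actorOutput') are placeholders and must match the names in your networks:

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Q-value critic, Q(s,a): takes both the observation and action specs
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'},criticOptions);

% Deterministic actor: maps observations directly to actions
actor = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'actorOutput'},actorOptions);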

Giampiero Campa on 6 Aug 2021 (edited 6 Aug 2021)
The table on this page might help.

ali on 14 Nov 2023
Hi, I have the same problem. You must use rlQValueRepresentation for the critic and rlDeterministicActorRepresentation for the actor. Also, the options object for each network must be the last argument of the function.
If you are using the RL biped robot example, replace line 51 with the code below:
% Q-value critic: the observation and action specs follow the network,
% and criticOptions now comes last
critic = rlQValueRepresentation(criticNetwork,env.getObservationInfo, ...
    env.getActionInfo,'Observation',{'observation'}, ...
    'Action',{'action'},criticOptions);
and replace line 88 for the actor:
% Deterministic actor: same argument order, with actorOptions last
actor = rlDeterministicActorRepresentation(actorNetwork,env.getObservationInfo, ...
    env.getActionInfo,'Observation',{'observation'}, ...
    'Action',{'ActorTanh1'},actorOptions);
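Once both representations build without errors, the rest of the example should work unchanged; the agent is assembled from them as before. A one-line sketch, where agentOptions stands for whatever rlDDPGAgentOptions object the example script already creates:

% Combine the corrected actor and critic into a DDPG agent
agent = rlDDPGAgent(actor,critic,agentOptions);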
