
Error training DDPG agent: rl.util.PolicyInstance.get()

1 view (last 30 days)
Jorge on 12 May 2020
Answered: katuysha on 10 Apr 2023
Hi all,
I'm trying to train my own DDPG agent for my hexapod robot, starting from the template of the MathWorks biped robot model (biped robot).
I have already modified the Simulink model to add my hexapod robot from Simscape Multibody, and I'm trying to make it learn to stand up (the initial position is lying on the ground), but when I try to train the DDPG agent I get the following error:
Error using rl.env.AbstractEnv/simWithPolicy (line 70)
An error occurred while simulating "rlClheroRobot" with the agent "rl.util.PolicyInstance.get()".
Error in rl.task.dq.ParCommTrainTask/runImpl (line 109)
[varargout{1},varargout{2}] = simWithPolicy(this.Env,this.Agent,simOpts);
Error in rl.task.Task/run (line 21)
[varargout{1:nargout}] = runImpl(this);
Error in rl.task.TaskSpec/internal_run (line 159)
[varargout{1:nargout}] = run(task);
Caused by:
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 689)
Invalid observation type or size.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 689)
Input data dimensions must match the dimensions specified in the corresponding observation and action info specifications.
But I don't know how to solve this problem or where it originates.
My project can be downloaded from this GitHub repository; you only have to run the live script "agente_entrenamiento.mlx".
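For reference, a minimal way to diagnose this kind of mismatch is to compare what the agent's specifications expect against what the environment reports. This is only a sketch; the variable names assume the environment was created with rlSimulinkEnv for the "rlClheroRobot" model:

```matlab
% Hypothetical diagnostic sketch (names assumed, not from the project):
% compare the dimensions the agent was built with against the
% environment's observation specification.
obsInfo = getObservationInfo(env);   % env created via rlSimulinkEnv
actInfo = getActionInfo(env);

disp(obsInfo.Dimension)   % e.g. [31 1] - must match the width of the
                          % signal wired into the RL Agent block's
                          % "observation" port in rlClheroRobot
disp(actInfo.Dimension)   % must match the "action" signal width
```

If the displayed dimensions differ from the actual signal widths in the Simulink model, the simulation aborts with exactly the "Invalid observation type or size" / "Input data dimensions must match" errors shown above.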
  1 Comment
一 研 on 20 Aug 2021
Have you solved it? I have the same problem. Thank you!


Answers (1)

katuysha on 10 Apr 2023
You need to check that the dimensions of the observation specification defined in the training script match the number of observations actually produced by the Simulink model.
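A minimal sketch of what that check looks like, assuming the observations are concatenated into a single column vector before the RL Agent block (the counts below are placeholders, not values from the project):

```matlab
% Hypothetical sketch: the observation spec must have exactly the same
% size as the observation signal in the Simulink model.
numObs = 31;   % <-- replace with the actual width of your observation vector
obsInfo = rlNumericSpec([numObs 1]);
obsInfo.Name = 'observations';

numAct = 18;   % e.g. one actuator command per hexapod joint (assumed)
actInfo = rlNumericSpec([numAct 1], 'LowerLimit', -1, 'UpperLimit', 1);
actInfo.Name = 'actions';

% The specs are passed to the environment; if numObs does not equal the
% width of the signal entering the RL Agent block, training fails with
% "Invalid observation type or size".
env = rlSimulinkEnv('rlClheroRobot', 'rlClheroRobot/RL Agent', ...
                    obsInfo, actInfo);
```

If you changed the robot from a biped to a hexapod, the number of joint states fed to the agent almost certainly changed, so the spec copied from the biped example has to be updated to match.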
