Error while creating DDPG RL agent

2 views (last 30 days)
Muhammad Nadeem on 16 Oct 2023
Commented: Muhammad Nadeem on 17 Oct 2023
Hello Everyone,
I am trying to train an agent for LQR-type control. My observation is a 59x1 vector of states and my control input is a 6x1 vector. I am testing the agent by requesting an action for a random observation, but I am getting the following error:
Unable to evaluate function model.
Caused by:
Error using deep.internal.network.dlnetwork/setLearnables
Layer 'fc': Invalid Learnables. Expected input to be of size [6 59], but it is of size [1 59].
My simplified code is as follows:
%% Critic neural network
obsPath = featureInputLayer(obsInfo.Dimension(1),Name="obsIn");
actPath = featureInputLayer(actInfo.Dimension(1),Name="actIn");
commonPath = [
concatenationLayer(1,2,Name="concat")
quadraticLayer
fullyConnectedLayer(1,Name="value", ...
BiasLearnRateFactor=0,Bias=0)
];
% Add layers to layerGraph object
criticNet = layerGraph(obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect layers
criticNet = connectLayers(criticNet,"obsIn","concat/in1");
criticNet = connectLayers(criticNet,"actIn","concat/in2");
criticNet = dlnetwork(criticNet);
critic = rlQValueFunction(criticNet, ...
obsInfo,actInfo, ...
ObservationInputNames="obsIn",ActionInputNames="actIn");
getValue(critic,{rand(obsInfo.Dimension)},{rand(actInfo.Dimension)})
%% Actor neural network
Biass = zeros(6,1); % linear actor: no bias term
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(actInfo.Dimension(1), ...
BiasLearnRateFactor=0,Bias=Biass)
];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
agent = rlDDPGAgent(actor,critic);
getAction(agent,{rand(obsInfo.Dimension)}) % error occurs while executing this line
%%

Accepted Answer

Sam Chak on 17 Oct 2023
Error using deep.internal.network.dlnetwork/setLearnables
Layer 'fc': Invalid Learnables. Expected input to be of size [6 59], but it is of size [1 59].
The reason for the error you're encountering is likely due to how you've set up the learning architecture, which resembles a control architecture. In linear control systems, the matrix differential equation can be expressed in the vector-field form known as the state-space system:

$$\dot{x} = A x + B u$$

Here, $u$ is a state-dependent function, something like $f(x)$, referred to as the state-feedback control input, and it is given by:

$$u = -K x$$

When we substitute $u$ into the state-space equation, it becomes:

$$\dot{x} = A x - B K x$$

This can be simplified by factoring out the state vector $x$:

$$\dot{x} = (A - B K) x$$

Now the equation size is perfectly conserved.

From the perspective of the RL agent, it expects the learnable parameters to be of size $6 \times 59$, which actually represents the LQR gain matrix $K$. This is because the control vector $u$, which has size $6 \times 1$, is already assimilated into the state space as the LQR, taking the form of state feedback.
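For illustration (not part of the original answer), here is a minimal sketch of an actor whose learnables already have the expected 6-by-59 shape, built by passing an explicit weight matrix to the fully connected layer. The initial gain W0 below is an assumed placeholder; obsInfo and actInfo are the specification objects from the question, taken to be 59x1 and 6x1.
% Sketch only: assumes obsInfo.Dimension = [59 1] and actInfo.Dimension = [6 1]
nx = obsInfo.Dimension(1);          % number of states (59)
nu = actInfo.Dimension(1);          % number of control inputs (6)
W0 = zeros(nu,nx);                  % assumed placeholder for the initial gain
actorNet = [
    featureInputLayer(nx)
    fullyConnectedLayer(nu, ...
        Weights=W0, ...             % explicit 6x59 weight matrix (the learnables)
        Bias=zeros(nu,1), ...       % linear actor, so no bias
        BiasLearnRateFactor=0)
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
getAction(actor,{rand(obsInfo.Dimension)})  % returns a cell containing a 6x1 action
Any 6-by-59 matrix would do as the initial value for W0; for instance, a gain computed offline with lqr could be used to warm-start the linear actor.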
2 Comments
Muhammad Nadeem on 17 Oct 2023
Perfect, thank you so much for the detailed explanation. But how do I fix it? Any guidance will be highly appreciated.
Thanks
Muhammad Nadeem on 17 Oct 2023
Fixed it, I initialized the actor to be a 6x56 matrix and now it runs. Thank you so much again for the help :)

More Answers (0)

Release: R2021b
