Why does rlQValueRepresentation always add a Regression Output (RepresentationLoss) layer to the end of the network?

1 view (last 30 days)
I have noticed that if I create a critic using rlQValueRepresentation, it includes a Regression Output layer (named RepresentationLoss). I would like to understand why this is always the case and what the purpose of that layer is. I tried reading the documentation but did not find anything on this subject in particular.
Also, when I analyze this "loss" layer, it does not seem to have any outputs, so I'm very confused about it. Could you please help clarify this?
Thanks in advance!
Here is the code I used to see the differences:
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
dnn = [
    featureInputLayer(obsInfo.Dimension(1),'Normalization','none','Name','state')
    fullyConnectedLayer(24,'Name','CriticStateFC1')
    reluLayer('Name','CriticRelu1')
    fullyConnectedLayer(24,'Name','CriticStateFC2')
    reluLayer('Name','CriticCommonRelu')
    fullyConnectedLayer(length(actInfo.Elements),'Name','output')];
figure
plot(layerGraph(dnn))
title('Original network');
critic = rlQValueRepresentation(dnn,obsInfo,actInfo,'Observation',{'state'});
criticmodel = getModel(critic);
figure;
plot(criticmodel);
title('Critic network');
% what are the outputs of this layer?
criticmodel.Layers(7, 1).NumOutputs
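For reference, the appended layer can also be inspected by name and class rather than by a hard-coded index. This sketch assumes (as in the plot above) that the RepresentationLoss layer is the last layer of the graph returned by getModel:

```matlab
% Inspect the automatically appended layer; indexing with 'end' avoids
% assuming a fixed position (position 7 above) in the layer array
appended = criticmodel.Layers(end);
disp(appended.Name)     % 'RepresentationLoss'
disp(class(appended))   % expected to be a regression output layer class
disp(appended.NumOutputs)
```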

Answers (0)

Release: R2021a
