REINFORCE algorithm: unable to compute gradients on the latest toolbox version
Views: 10 (last 30 days)
The LSTM actor network takes 50 time steps of three states as input, so a single observation has dimension 3x50.
For computing gradients, the input data is in the following format:
num_states x batchsize x N_TIMESTEPS = (3x1)x50x50.
In Reinforcement Learning toolbox version 1.3, the following line works perfectly.
% actor - the custom actor network, actorLossFunction - custom loss function, lossData - custom variable
actorGradient = gradient(actor,@actorLossFunction,{reshape(observationBatch,[3 1 50 50])},lossData);
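For reference, a minimal sketch of what a REINFORCE-style loss function of this shape could look like (the body below is an illustration only; the actual actorLossFunction and the field names of lossData are not shown in this post and are assumed here):
function loss = actorLossFunction(policy, lossData)
    % policy   - actor output (action probabilities), passed in as a cell array
    % lossData - user-defined struct; the field names below are hypothetical
    policy = policy{1};
    policy(policy < eps) = eps;    % guard against log(0)
    % Weight log-probabilities of the taken actions by the discounted return
    weightedLogProb = lossData.actionMask .* log(policy) .* lossData.discountedReturn;
    loss = -sum(weightedLogProb,'all') / lossData.batchSize;
end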
However, when I run the same code in the latest RL toolbox version 2.2, I get the following error:
------------------------------------------------------------------------------------------------------------------------------------------------------
Error using rl.representation.rlAbstractRepresentation/gradient
Unable to compute gradient from representation.
Error in simpleRLTraj (line 184)
actorGradient= gradient(actor,@actorLossFunction,{reshape(observationBatch,[3 1 50 50])},lossData);
Caused by:
Error using extractBinaryBroadcastData
dlarray is supported only for full arrays of data type double, single, or logical, or for full gpuArrays of
these data types.
------------------------------------------------------------------------------------------------------------------------------------------------------
I tried tracing back through the error, but it only gets more complicated. Why do I get an error for code that works perfectly on the earlier version of the RL toolbox?
Comments: 0
Accepted Answer
Joss Knight
5 Apr 2022
Edited: Joss Knight
5 Apr 2022
What is
underlyingType(observationBatch)
underlyingType(lossData)
?
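If either of those returns something other than 'double', 'single', or 'logical', one possible workaround (a sketch building on this suggestion, not from the original thread) is to gather and cast the observation data to a supported floating-point type before calling gradient:
% dlarray only accepts full double, single, or logical data (or gpuArrays of those types)
observationBatch = single(full(gather(observationBatch)));   % cast to a supported type
actorGradient = gradient(actor,@actorLossFunction,{reshape(observationBatch,[3 1 50 50])},lossData);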
More Answers (0)