
Reinforcement Learning Toolbox: Not enough Room in buffer

3 views (last 30 days)
Clemens Fricke on 1 Jul 2019
Commented: Aysegul Kahraman on 30 Jan 2022
The problem was a misunderstanding of an example, so this error was caused by a user error that is explained in one of the comments on this issue. If that comment does not apply to you, you most likely have a different issue.
Hello,
I am pretty new to the realm of RL and am using the RL Toolbox to control a Simulink model with the DDPG agent.
I have 2 actions and 2 observations.
My problem is that every time I try to train the agent, I get the error:
An error occurred while running the simulation and the simulation was terminated
Caused by:
MATLAB System block 'rlMockLoop/RL Agent/AgentWrapper' error occurred when invoking 'outputImpl' method of 'AgentWrapper'. The error was thrown from '
'/usr/local/MATLAB/R2019a/toolbox/rl/rl/+rl/+util/ExperienceLogger.m' at line 30
'/usr/local/MATLAB/R2019a/toolbox/rl/rl/+rl/+agent/AbstractPolicy.m' at line 95
'/usr/local/MATLAB/R2019a/toolbox/rl/rl/simulink/libs/AgentWrapper.m' at line 107'.
Not enough room in the buffer to store the new experiences. Make sure the bufferSize argument is big enough.
I tried to increase the agent option ExperienceBufferLength (even to pretty high values).
Is that even the right option I should be looking at, or am I missing something?
Code snippets:
Ts = 0.05;   % sample time (roughly 0.05 s)
actionInfo = rlNumericSpec([2 1],...
'LowerLimit',[0 0]',...
'UpperLimit',[100 100]');
actionInfo.Name = 'StromstaerkeProzent';
actionInfo.Description = 'Aout, Ain';
%% Specify Observations
observationInfo = rlNumericSpec([2 1]);
observationInfo.Name = 'pressure';
observationInfo.Description = 'DruckWasser, Druck';
agentOpts = rlDDPGAgentOptions(...
'SampleTime',Ts,...
'TargetSmoothFactor',1e-3,...
'ExperienceBufferLength',512*((10/Ts)*1000),...
'DiscountFactor',0.99,...
'MiniBatchSize',512);
agent = rlDDPGAgent(actor,critic,agentOpts);
trainingOptions = rlTrainingOptions(...
'MaxEpisodes',1000, ...
'MaxStepsPerEpisode',10/Ts, ...
'ScoreAveragingWindowLength',5,...
'Verbose',false, ...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',-1100,...
'SaveAgentCriteria','EpisodeReward',...
'SaveAgentValue',-1100);
simOptions = rlSimulationOptions('MaxSteps',10/Ts);
experience = sim(env,agent,simOptions);
Other:
I tried to make the buffer size relative to the episode count and the 10 s episode length (a rough sketch of that idea follows below).
I really hope somebody can help me.
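For reference, a minimal sketch of what "relative to the episode count and episode length" could look like, assuming the Ts = 0.05 and 10 s episodes from the snippets above; the concrete numbers are illustrative, not from the original post.
Ts = 0.05;
maxStepsPerEpisode = ceil(10/Ts);                     % 200 steps per 10 s episode
maxEpisodes        = 1000;
bufferLength       = maxStepsPerEpisode*maxEpisodes;  % 2e5 experiences, roughly the whole training run
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',Ts, ...
    'ExperienceBufferLength',bufferLength, ...
    'MiniBatchSize',512);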
  6 Comments
eyman ikhlaq on 3 Jan 2021
I am facing the same issue; please send a solution.
Aysegul Kahraman on 30 Jan 2022
Hi,
You can try to use a Delay block after the action.
It solved my problem, although that was not my first choice for solving this issue.


Accepted Answer

YUCHEN Liu on 4 Aug 2020
Sometimes connecting the output of the agent directly back to the reward will cause this situation; you may need a Delay block.
  1 Comment
Aysegul Kahraman on 2 Nov 2021
I am facing the same issue and could not find a solution. There is definitely a relationship to not having a delay block. However, I have a delay block before the reward, which should be enough.
Any solution or suggestion?


More Answers (0)

Release

R2019a

