RL Toolbox: DQN epsilon-greedy exploration with epsilon=1 does not act randomly
Views: 3 (last 30 days)
Tobias Schindler
25 January 2021
Commented: Tobias Schindler on 5 October 2021
Setup:
- Custom Simulink environment
- DQN Agent
To get a baseline for the environment, I started training a DQN agent with:
opt.EpsilonGreedyExploration.Epsilon=1;
opt.EpsilonGreedyExploration.EpsilonDecay=0.0;
opt.EpsilonGreedyExploration.EpsilonMin=1;
This means the agent should never exploit the greedy action. As stated in the documentation (https://de.mathworks.com/help/reinforcement-learning/ug/dqn-agents.html):
During each control interval, the agent either selects a random action with probability ϵ or selects an action greedily with respect to the value function with probability 1-ϵ.
--> Epsilon=1 means the probability of taking the greedy action is zero. The documentation does not clearly state how the random action is sampled, but it should be uniform.
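To make my expectation concrete, here is a minimal sketch of textbook epsilon-greedy selection (Python for illustration only, not the toolbox's actual implementation):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Textbook epsilon-greedy: with probability epsilon pick a uniformly
    random action index, otherwise pick the greedy (argmax-Q) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # uniform random action
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy

# With epsilon=1, rng.random() < 1 is always true (random() returns values
# in [0, 1)), so the greedy branch should never be taken.
```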
Now with the above settings, the DQN agent should never exploit the greedy policy during training. However, when starting the simulation and watching the output of the episodes, it is clear that the agent does in fact exploit the policy and does not act randomly.
- What is going on here? Why does the agent not act randomly during training?
- Is the sampling of the actions uniform? (Not related to the epsilon=1 behavior)
- When exactly is the decay applied? I think I read somewhere in the documentation that it happens every training step, i.e., for DQN every time step of the simulation with the SampleTime of rlDQNAgentOptions? It would be handy to have this stated clearly in the part of the documentation that explains epsilon-greedy exploration.
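For reference, my reading of the rlDQNAgentOptions page is that epsilon is updated multiplicatively, Epsilon = Epsilon*(1-EpsilonDecay), while it is above EpsilonMin. A sketch of that schedule (Python for illustration; this is my interpretation, not toolbox code):

```python
def decay_epsilon(epsilon, epsilon_decay, epsilon_min):
    """Multiplicative epsilon decay as I understand the rlDQNAgentOptions
    doc: applied once per training step while epsilon exceeds EpsilonMin."""
    if epsilon > epsilon_min:
        epsilon = epsilon * (1 - epsilon_decay)
    return max(epsilon, epsilon_min)

# With the settings above (Epsilon=1, EpsilonDecay=0, EpsilonMin=1),
# epsilon never changes: 1 * (1 - 0) = 1, so exploration should stay full.
```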
I quite like the toolbox so far; there are just some implementation details that are a bit hard to grasp, i.e., it's not 100% clear to me how MATLAB implements them.
Comments: 0
Accepted Answer
Emmanouil Tzorakoleftherakis
9 February 2021
Edited: Emmanouil Tzorakoleftherakis on 9 February 2021
Hello,
Maybe I misread the question, but you are saying "when starting the Simulation and watching the output of the episodes...". Just to clarify: if you hit the "play" button in Simulink or use the "sim" command, exploration is out of the picture; Simulink will only do inference on the agent. Exploration is used only when you call "train".
To your other question: sampling of the random action in DQN is indeed uniform during exploration.
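In other words, inference and training use different action-selection paths. A minimal Python sketch of that distinction (function names are illustrative, not the toolbox's internals):

```python
import random

def greedy_action(q_values):
    """Inference path ("sim"/play button): always exploit the learned
    value function; epsilon plays no role here."""
    return max(range(len(q_values)), key=lambda a: q_values[a])

def training_action(q_values, epsilon, rng=random):
    """Training path ("train"): epsilon-greedy, where the random action
    is drawn uniformly over the discrete action set."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # uniform exploration
    return greedy_action(q_values)
```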
Comments: 8
More Answers (0)