Live Monitoring of Critic Predictions in the RL Toolbox

Views: 1 (last 30 days)
walli on 17 Aug 2020
Edited: walli on 17 Aug 2020
I'm wondering if it is possible to monitor the Q-value predictions within any critic-based RL approach using the RL Toolbox. For example, with a multi-output DQN agent, the internal deep neural network has to be called at every step to evaluate all possible discrete actions given the current state sample. Hence, somewhere internally there must be a Q-value prediction for every available discrete action, and these predictions are then compared to find the optimal action.
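For clarity, here is a minimal sketch of the kind of query I mean, using getCritic and getValue from the toolbox (agent and obs are placeholder names for an rlDQNAgent and a state sample matching its observation spec):

critic = getCritic(agent);          % extract the critic representation from the agent
qValues = getValue(critic, {obs});  % one Q-value estimate per discrete action
[~, greedyIdx] = max(qValues);      % index of the greedy action, for comparison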
However, having spent some time on the 2020a documentation, I was not able to find a way to access these internal Q-value predictions at each time step. In particular, it would be nice if the Simulink-based agent block could provide these predictions for further processing and monitoring during the training and deployment phases.
Does somebody have a useful hint on how to retrieve the Q-value estimates during learning?
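So far the closest I have found is to query a snapshot of the critic outside of train(), stepping through an episode manually, along these lines (a sketch only; env, agent, and maxSteps are placeholders, and getValue is called as above):

critic = getCritic(agent);
obs = reset(env);
maxSteps = 500;                                  % placeholder episode length
qLog = [];                                       % one row of Q-values per step
for t = 1:maxSteps
    qLog(end+1, :) = getValue(critic, {obs})';   % record all action values for this state
    action = getAction(agent, {obs});            % agent's action for this state
    if iscell(action), action = action{1}; end   % unwrap if returned as a cell array
    [obs, ~, isDone] = step(env, action);
    if isDone, break; end
end

This only inspects a fixed snapshot of the critic, though, not the evolving predictions during training itself.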

Answers (0)
