How to Perform Gradient Descent for DQN Loss Function

Views: 2 (last 30 days)
Sherry X on 10 Mar 2020
Edited: Sherry X on 10 Mar 2020
I'm writing a DQN from scratch, and I'm confused about the procedure for updating the evaluation network (evalNet) via gradient descent.
The standard DQN algorithm defines two networks: an evaluation network Q(s, a; θ) (evalNet) and a target network Q̂(s, a; θ⁻) (targetNet). It trains on minibatches sampled from replay memory and updates θ with a gradient descent step on the loss (y − Q(s, a; θ))², where y = r + γ · max_a' Q̂(s', a'; θ⁻).
I define targetNet as a copy of evalNet. When updating evalNet, I first set the regression targets equal to evalNet's own current outputs, and then only overwrite the entry for the action actually taken with y, which guarantees the error is zero for every other action. Then I update targetNet by copying evalNet's weights every so often. If I choose the feedforward training method as '', does [1] update evalNet correctly via gradient descent?
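To make the target-construction trick concrete, here is a minimal NumPy sketch of the update I mean (illustrative only: a linear map stands in for the networks, and names like eval_W are made up; my real code uses MATLAB feedforward networks):

```python
import numpy as np

# Toy stand-in for evalNet/targetNet: a linear "network" W with Q(s,.) = W @ s.
# The point is the target construction: copy the network's own outputs as
# regression targets, then overwrite only the entry for the action taken,
# so every other action contributes zero error (and zero gradient).

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.99

eval_W = rng.normal(size=(n_actions, n_states))  # evalNet parameters (theta)
target_W = eval_W.copy()                         # targetNet parameters (theta-)

def q_values(W, s):
    return W @ s                                 # Q(s, a) for all actions a

# One transition (s, a, r, s') from a replay minibatch
s, a, r, s_next = rng.normal(size=n_states), 1, 1.0, rng.normal(size=n_states)

y = q_values(eval_W, s).copy()                   # targets = evalNet's own outputs
y[a] = r + gamma * q_values(target_W, s_next).max()  # overwrite taken action only

# One gradient descent step on 0.5 * ||y - Q(s,.;theta)||^2 w.r.t. eval_W.
# Because y equals Q everywhere except action a, only row a moves.
err = y - q_values(eval_W, s)
eval_W += 0.01 * np.outer(err, s)
```

With this construction the squared-error loss reduces to the single-action DQN loss, which is why training a supervised regression on (state, targets) pairs performs the intended Q-learning update.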

Answers (0)

Category: Deep Learning Toolbox
