How to show the loss change of the critic or actor network when training with the DDPG algorithm

7 views (last 30 days)
How can I show the change in the loss of the critic or actor network when training with the DDPG algorithm?

Answers (1)

Poorna on 29 September 2023
Hi,
I understand that you would like to view the change in the loss values of the actor and critic networks of a DDPG agent during training.
You can log these losses to a Training Progress Monitor using the MonitorLogger functionality of Reinforcement Learning Toolbox. Follow these steps:
1. Create a "monitor" object using the "trainingProgressMonitor" function:
% Create a training progress monitor; logged data will be plotted in its window
monitor = trainingProgressMonitor();
2. Create a "logger" object using the "rlDataLogger" function with the "monitor" as input:
% Create a MonitorLogger that sends logged data to the monitor
logger = rlDataLogger(monitor);
3. Use the "AgentLearnFinishedFcn" callback property of the logger object to log the losses. Write a custom callback function that receives a structure containing the actor and critic losses, as well as other useful information, and returns the fields you want to log; a sketch is shown below.
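As a minimal sketch, the callback could look like the following. The "ActorLoss" and "CriticLoss" field names are an assumption about the data structure the logger passes in for a DDPG agent; check the structure you actually receive and adjust the field names accordingly. Save this in its own file, e.g. logAgentLearnData.m:
% Callback invoked after each learn step; "data" is a structure supplied
% by the logger. ActorLoss/CriticLoss are assumed field names for DDPG.
function dataToLog = logAgentLearnData(data)
    dataToLog.ActorLoss  = data.ActorLoss;
    dataToLog.CriticLoss = data.CriticLoss;
end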
4. Train the agent, passing the logger to the "train" function. The logged losses are then plotted in the Training Progress Monitor window as training runs, and you can use them for further analysis or visualization.
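Putting the pieces together, here is a minimal hedged sketch; "agent", "env", and "trainOpts" are placeholders assumed to come from your existing DDPG training setup, and the "Logger" name-value argument of "train" requires a release that includes the data-logging functionality:
% Attach the custom callback, then pass the logger to train
logger.AgentLearnFinishedFcn = @logAgentLearnData;
trainResult = train(agent, env, trainOpts, Logger=logger);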
For more information on these functions, please refer to the MathWorks documentation for "trainingProgressMonitor" and "rlDataLogger".
Hope this helps.
