DDPG current controller for an R-L load shows steady-state offset in id/iq after training (adapted from "Train TD3 Agent for PMSM Control")

Views: 30 (last 30 days)
I adapted the official example "Train TD3 Agent for PMSM Control" (can be found here) to a simple current controller for an R-L load and trained a very similar DDPG agent. Training looks stable (the reward converges, though not to zero), but when I run the model after training I see a steady-state offset in both id and iq.
The official TD3 PMSM example also shows a small steady-state current offset and states that it is within about 2%. My DDPG variant exhibits the same behavior, but with a larger offset. I'd like guidance on eliminating the offset (or the best practice for doing so) rather than accepting it.
I have also uploaded the three modified files for this simple current controller.
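For context, the fix I'm considering (my own sketch, not something from the official example; all signal and variable names below are placeholders, not from my uploaded files) is to give the agent explicit integral action by feeding the integrated d/q current errors into the observation vector, so the policy can drive the steady-state error to zero, much like the integral term of a PI controller. Roughly:
% Observation = [id_err; iq_err; int_id_err; int_iq_err; id_ref; iq_ref]
% int_id_err / int_iq_err would be outputs of Integrator blocks acting on
% the d- and q-axis current errors in the Simulink model.
numObs = 6;
obsInfo = rlNumericSpec([numObs 1]);
obsInfo.Name = 'observations';
% Reward sketch that also penalizes the integrated error, so a nonzero
% steady-state offset keeps costing the agent (weights Q1, Q2, R are guesses):
% r = -(Q1*(id_err^2 + iq_err^2) + Q2*(int_id_err^2 + int_iq_err^2) ...
%       + R*(du_d^2 + du_q^2));
Is augmenting the observation and reward like this the recommended way to remove the offset, or is there a better-established practice (e.g., longer training, different exploration noise, or an outer integral loop around the trained policy)?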

Answers (0)

Release: R2025a
