How can I set the initial value of action space while using Simulink DDPG Agent?

Views: 29 (last 30 days)
lei wang on 20 November 2024 at 14:48
Answered: Shlok on 29 November 2024 at 6:52
I got a robot model in Simulink, now I want to train the robot using DDPG Agent.
My question is: how can I set the initial value of the action space? I want the action to start from a specific value, such as zero.

Answers (1)

Shlok on 29 November 2024 at 6:52
Hi Lei,
DDPG agents are designed to operate in continuous action spaces. To define a custom continuous action space for a DDPG agent, you can use the “rlNumericSpec” function, which creates a specification object for a numeric action or observation channel. By setting its “LowerLimit” property to zero, you can constrain the action values so that they begin at zero.
Here is a sample code:
actionInfo = rlNumericSpec([1,1], 'LowerLimit', 0, 'UpperLimit', 1);
observationInfo = rlNumericSpec([4 1]);      % example: 4-element observation channel; replace with your robot's observations
agtInitOpts = rlAgentInitializationOptions;  % default agent initialization options; customize as needed
agent = rlDDPGAgent(observationInfo, actionInfo, agtInitOpts);
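As a quick sanity check, you can confirm that the limits are carried on the agent's action specification. This is a minimal sketch assuming the Reinforcement Learning Toolbox; the observation dimensions here are placeholders:

actionInfo = rlNumericSpec([1,1], 'LowerLimit', 0, 'UpperLimit', 1);
observationInfo = rlNumericSpec([4 1]);   % placeholder observation channel
agent = rlDDPGAgent(observationInfo, actionInfo);
spec = getActionInfo(agent);              % returns the action rlNumericSpec
disp([spec.LowerLimit spec.UpperLimit])   % both limits are preserved on the agent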
To learn more about the “rlNumericSpec” function, you can refer to the following MATLAB Answer and documentation:

Categories

Learn more about Reinforcement Learning in Help Center and File Exchange

Release

R2024b

