Agent doesn't take different actions for different states

5 views (last 30 days)
Bryan on 21 June 2024
Edited: Alan on 4 July 2024
Hello everyone,
I have two issues:
  1. I wasn't able to set up the environment so that the agent takes 24 different actions over the course of a day, i.e. one action per hour. As a workaround, I decided to train a separate agent for each hour.
  2. The second issue, which is the reason for my question, arises after training the agent. When I test the quality of its decision-making by running a simulation with the RL Toolbox, I notice that the agent always takes the same action regardless of the state of the environment. This suggests that training has converged on a single best action for all states, which is not what I want: the agent should take the appropriate action for each state. I've been analyzing my environment code but can't figure out why the agent behaves this way (one quick check is sketched below).
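One quick way to confirm this behaviour is to query the trained policy directly with a few different observations. A minimal sketch, assuming agent is the trained agent; the [1 99] observation size comes from the specs quoted in the comments below, and the random test states are assumptions:

obsInfo = getObservationInfo(agent);   % should report rlNumericSpec([1 99])
for k = 1:5
    obs = rand(obsInfo.Dimension);     % a different random state each time
    a = getAction(agent, {obs});       % greedy action for this state
    disp(a{1}.')                       % identical rows => the policy ignores the state
end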
Thank you in advance.
Bryan
  3 comments
Bryan on 23 June 2024
Thank you for your observation.
Regarding the first issue, I have 5 quantities that can each vary continuously between -1 and 1, so they are not discrete. I have therefore defined my actions as follows:
ActionInfo = rlNumericSpec([5 1], 'LowerLimit', [-1; -1; -1; -1; -1], 'UpperLimit', [1; 1; 1; 1; 1]);
I understand that if all 24 hourly actions were taken at once, the spec would have to hold 5 × 24 = 120 elements:
ActionInfo = rlNumericSpec([120 1], 'LowerLimit', -ones(120,1), 'UpperLimit', ones(120,1));
Isn't this action dimension too large?
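(As an aside, rlNumericSpec expands scalar limits to every element, so a 120-element spec with the same bounds on every entry could be written more compactly:)

ActionInfo = rlNumericSpec([120 1], 'LowerLimit', -1, 'UpperLimit', 1);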
Another option I considered is changing the observation dimension, which originally is:
ObservationInfo = rlNumericSpec([1 99])
To:
ObservationInfo = rlNumericSpec([24 99])
The problem with this option comes from the step function: while I can build the observation in reset without issues, in step I cannot work out how to apply a different action each hour to obtain the next observation (one way to structure this is sketched below).
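A minimal sketch of that structure, assuming an rlFunctionEnv-style environment where an hour counter lives in LoggedSignals; the helpers applyAction, buildObservation and computeReward are hypothetical placeholders for the actual model:

function [nextObs, reward, isDone, loggedSignals] = myStepFunction(action, loggedSignals)
% Apply one 5-element action for the current hour, then advance the clock.
h = loggedSignals.Hour;                                % hour counter, initialised to 1 in reset
state = applyAction(loggedSignals.State, action, h);   % hypothetical dynamics for hour h
nextObs = buildObservation(state, h);                  % hypothetical, returns the [1 99] observation
reward = computeReward(state, action, h);              % hypothetical per-hour reward
loggedSignals.State = state;
loggedSignals.Hour = h + 1;
isDone = (loggedSignals.Hour > 24);                    % one episode = one day of 24 steps
end

With this structure the action spec can stay rlNumericSpec([5 1], ...) and the observation stays [1 99]; the current hour can also be appended to the observation so the agent can condition on the time of day.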
Regarding the second issue: during training I displayed the actions taken, and indeed different actions are taken until near the end of training, after which the same action is taken despite different states. As for the network architecture, it was created by the toolbox, so I can't comment on it directly; I am attaching an image of the actor and critic networks and the training plot.
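One cause worth checking (an assumption, since the agent type isn't stated in the thread): the default continuous-action agents use a tanh output layer in the actor, which can saturate at the action limits late in training and then produce the same action for every state. A quick saturation probe on the actor alone, using extreme test states that are likewise assumed:

actor = getActor(agent);                              % extract the trained actor
obsInfo = getObservationInfo(agent);
a1 = getAction(actor, {-ones(obsInfo.Dimension)});    % extreme test state 1 (assumed)
a2 = getAction(actor, { ones(obsInfo.Dimension)});    % extreme test state 2 (assumed)
disp([a1{1} a2{1}])                                   % values pinned at -1 or 1 suggest saturation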
Thank you very much in advance.
Bryan
Alan on 4 July 2024
Edited: 4 July 2024
Hi Bryan,
Could you describe your environment a bit more? The following is some information I would like to know:
  1. What happens in each step of the episode? Does a step span one hour or the full 24 hours?
  2. How have you modeled your reward function? Does it incentivize the agent well?
  3. What agent are you using?
It would be great if you could share the environment file and the training script as well.
Regards.


Answers (0)

Release: R2023b
