
5G Handover with Reinforcement Learning: mismatch of input channels and observations in the reinforcement learning representation

Hello,
This is my final-year project, and I had no prior RL coding experience before starting it.
I am trying to create a custom RL environment in MATLAB. In this environment, I have defined my observation space as rlNumericSpec([numUE*2 1]) because I have numUE user equipments, each with 2 coordinates (x, y). My action is a binary decision: perform the handover or not.
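Here is roughly how I define the specs (simplified sketch; numUE is set earlier in my script):
numUE = 10;                            % example value from my scenario
obsInfo = rlNumericSpec([numUE*2 1]);  % (x, y) coordinates for each of the numUE UEs
actInfo = rlFiniteSetSpec({[0 1]});    % my handover action definition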
This is what I get when I run the code:
Error using rl.representation.rlAbstractRepresentation/validateModelInputDimension
Model input sizes must match the dimensions specified in the corresponding
observation and action info specifications.
Error in rl.representation.rlQValueRepresentation (line 47)
validateModelInputDimension(this)
Error in rlQValueRepresentation (line 130)
Rep = rl.representation.rlQValueRepresentation(Model, ...
Error in train2test (line 53)
critic = rlQValueRepresentation(criticNetwork,env.getObservationInfo(),env.getActionInfo(),'Observation',{'state'},'Action',{'action'},criticOpts);
1 Comment
Lee Xing Wei on 12 Jul 2023
I have tried changing my observation many times and still get the same error. Actually, I want to use UEpositions, UEvelocity, and UEBSconnections as my observations.
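For example, something like this (a sketch; the per-UE sizes are my guesses):
% 2 position values + 2 velocity values + 1 serving-BS index per UE,
% all stacked into a single column vector
obsInfo = rlNumericSpec([numUE*5 1]);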


Answers (1)

Emmanouil Tzorakoleftherakis on 17 Jul 2023
I suspect you did not set up your critic network properly. If you share that code snippet, we can take a closer look. An alternative would be to use the default agent feature and let the software create a critic for you automatically based on the provided observation and action spaces. Here is an example that assumes you want to create a DQN agent:
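A minimal sketch (the spec lines assume the observation and action spaces described in the question):
numUE = 10;                            % example; use the value from your environment
obsInfo = rlNumericSpec([numUE*2 1]);  % (x, y) per UE
actInfo = rlFiniteSetSpec([0 1]);      % 0 = no handover, 1 = handover
agent = rlDQNAgent(obsInfo, actInfo);  % a default critic is created automatically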
3 Comments
Emmanouil Tzorakoleftherakis on 19 Jul 2023
As I mentioned, you can create the agent using the default agent feature like this:
agent = rlDQNAgent(obsInfo,actInfo)
If you run this line, it won't produce any errors. You can then check what the neural network looks like by doing:
critic = getCritic(agent);     % extract the critic from the agent
criticNet = getModel(critic);  % get the underlying neural network
plot(criticNet);               % visualize the network architecture
That said, take another look at how you defined your action space. If the output should be only 0 or 1, your action space is not defined correctly; it is currently a cell array containing the single element [0 1].
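For example (a sketch; this defines two discrete scalar actions rather than one vector-valued action):
actInfo = rlFiniteSetSpec([0 1]);    % two possible actions: 0 or 1
% By contrast, rlFiniteSetSpec({[0 1]}) defines a single action whose value is the 1x2 vector [0 1].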

