How to create a neural network for multiple agents with discrete and continuous actions?
Janani Sunil
26 Apr 2021
Answered: Emmanouil Tzorakoleftherakis on 26 Apr 2021
Hi All,
I am trying to create an RL model with two agents in my environment.
Both agents' observations are continuous, but Agent 1's actions are discrete and Agent 2's actions are continuous. How do I specify them while building the actor networks?
% Create action specifications for the two agents
numActions = 3;               % Agent 1: 3 binary pulse channels (discrete)
numActions2 = 1;              % Agent 2: 1 continuous action
actionSizes = numActions + numActions2;
numActionCombinations = 8;    % 2^numActions discrete combinations

% All 8 combinations of the 3 binary pulse channels
S0 = [0 0 0]; S1 = [0 0 1]; S2 = [0 1 1]; S3 = [0 1 0];
S4 = [1 1 0]; S5 = [1 0 1]; S6 = [1 0 0]; S7 = [1 1 1];
actionInfo = rlFiniteSetSpec({S0,S1,S2,S3,S4,S5,S6,S7});
actionInfo2 = rlNumericSpec([numActions2 1],'LowerLimit',0.05,'UpperLimit',30);
actionInfo.Name = 'Pulse';
actionInfo2.Name = 'cRef';

% Actor network (obsSizes is the observation dimension, defined elsewhere)
net = [ featureInputLayer(obsSizes,'Normalization','none','Name','state')
        fullyConnectedLayer(actionSizes,'Name','fc')
        softmaxLayer('Name','actionProb') ];
actor = rlStochasticActorRepresentation(net,obsInfo,actionInfo,'Observation','state');
Accepted Answer
Emmanouil Tzorakoleftherakis
26 Apr 2021
If you want to specify the neural network structures yourself, there is nothing special you need to do: simply create two actors and two critics, one pair for each agent's action space, and you are all set.
Alternatively, you can use the default agent feature, where the neural networks are created automatically from just the observation and action specifications. See an example here.
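A minimal sketch of the "one actor per action space" approach, assuming Reinforcement Learning Toolbox with the R2021a-era `rlStochasticActorRepresentation` API used above. The observation size `obsSize` is a hypothetical placeholder; match it to your environment:

```matlab
obsSize = 4;                                  % hypothetical; use your environment's size
obsInfo = rlNumericSpec([obsSize 1]);

% Agent 1: discrete actions -> softmax over the 8 pulse combinations.
% Note the fc layer outputs one logit per element of the finite set (8),
% not one per action channel.
discreteActInfo = rlFiniteSetSpec({[0 0 0],[0 0 1],[0 1 0],[0 1 1], ...
                                   [1 0 0],[1 0 1],[1 1 0],[1 1 1]});
discreteNet = [ featureInputLayer(obsSize,'Normalization','none','Name','state')
                fullyConnectedLayer(numel(discreteActInfo.Elements),'Name','fc')
                softmaxLayer('Name','actionProb') ];
discreteActor = rlStochasticActorRepresentation(discreteNet,obsInfo, ...
    discreteActInfo,'Observation',{'state'});

% Agent 2: continuous action -> Gaussian policy. The network must output
% 2*numActions values (means, then standard deviations); the softplus
% layer keeps the outputs positive so the std is valid.
contActInfo = rlNumericSpec([1 1],'LowerLimit',0.05,'UpperLimit',30);
contNet = [ featureInputLayer(obsSize,'Normalization','none','Name','state')
            fullyConnectedLayer(2,'Name','meanAndStd')   % [mean; std]
            softplusLayer('Name','out') ];
contActor = rlStochasticActorRepresentation(contNet,obsInfo, ...
    contActInfo,'Observation',{'state'});
```

Each actor (with a matching critic) then goes into its own agent, e.g. a PG or PPO agent per action space.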