Is it possible to carry out multi agent reinforcement learning in MATLAB ?

I wish to train multiple agents(2 agents to 10 agents) in a grid map environment with same reward function.
Is it possible ? If so, suggest me some articles or books to start with !!!
I am comfortable with single agent training.
Thanks in advance !!!

Accepted Answer

Harsha Priya Daggubati, 26 March 2020


I understand you are looking for a way to simulate a multi-agent reinforcement learning environment. Unfortunately, Reinforcement Learning Toolbox currently does not support multi-agent scenarios. You would need to write a custom environment and training algorithm for such a scenario.
Due to the large number of requests for this feature, the development team is actively working on multi-agent support, and it will be available in a future release. If you decide to write your own environment and training algorithm, the following documentation would be a good place to start:
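For reference, a custom-environment skeleton can be generated from a class template and then filled in with your own dynamics. A minimal sketch, assuming you encode the grid as a shared observation (the class name, grid size, and action set below are placeholders, not from the original thread):

```matlab
% Generate MyMultiAgentGridEnv.m, a class template with step/reset stubs
rlCreateEnvTemplate("MyMultiAgentGridEnv");

% In the generated constructor you would then define the I/O channels,
% e.g. a shared grid observation and a small discrete action set:
obsInfo = rlNumericSpec([26 26]);   % placeholder: grid occupancy map
actInfo = rlFiniteSetSpec(1:4);     % placeholder: up/down/left/right
```

The multi-agent logic (one reward per agent, turn order or simultaneous moves) would live in the `step` method you implement inside the generated class.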

1 Comment

I want to implement the same idea as the original poster, so I hope you can share the link to help those of us with the same requirement.


More Answers (1)

Emmanouil Tzorakoleftherakis, 29 September 2020


As of the R2020b release, Reinforcement Learning Toolbox lets you train multiple agents simultaneously in Simulink. Please see the following examples for reference:
  1. Train Multiple Agents for Path Following Control
  2. Train Multiple Agents for Area Coverage
  3. Train Multiple Agents to Perform Collaborative Task
Hope that helps
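As a rough sketch of the R2020b workflow described in those examples (the model name, block paths, and spec dimensions below are placeholders for your own setup, not taken from the thread):

```matlab
% Sketch: environment with two Agent blocks in one Simulink model
mdl = "myMultiAgentModel";                       % placeholder model name
agentBlks = [mdl + "/Agent A", mdl + "/Agent B"];% placeholder block paths
obsInfo = {rlNumericSpec([4 1]), rlNumericSpec([4 1])};
actInfo = {rlFiniteSetSpec([-1 0 1]), rlFiniteSetSpec([-1 0 1])};
env = rlSimulinkEnv(mdl, agentBlks, obsInfo, actInfo);

opts = rlTrainingOptions("MaxEpisodes", 500);
% stats = train([agentA agentB], env, opts);     % agents created beforehand
```

The key point is that `rlSimulinkEnv` accepts an array of Agent block paths plus cell arrays of observation and action specs, one entry per agent.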

8 Comments

Hi, I am currently trying to train multiple agents. I trained 2 agents quite smoothly, but when I increased the number of agents to 3, I started getting error messages (please see the attached image). All three agents have the same observation and their own individual actions and rewards (which was also the case with two agents). I am not sure what changed when I went from two to three agents! I followed the "Train Multiple Agents for Area Coverage" example to set things up, the only differences being that I didn't use the selectors and I didn't multiplex the actions.
I would be very grateful if someone could let me know what I'm doing wrong!
Thank you :)
Hello Shabnam,
If you are not using the selectors, please ensure that the observation and action dimensions are correct for all agents. This means the observation and action signal dimensions at each Agent block must match those defined in the I/O specifications when you created the agents.
Feel free to share your model and the necessary files so that we can take a look. If you don't want to share them in this forum, you can open a Technical Support Case here.
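One quick sanity check, assuming the agents are collected in an array (the `agents` variable below is a placeholder), is to print the dimensions each agent expects and compare them against the signal sizes wired into the Agent blocks:

```matlab
% Sketch: list the I/O dimensions each agent was created with
for k = 1:numel(agents)
    oInfo = getObservationInfo(agents(k));
    aInfo = getActionInfo(agents(k));
    fprintf("Agent %d: obs %s, act %s\n", k, ...
        mat2str(oInfo(1).Dimension), mat2str(aInfo(1).Dimension));
end
```

Any mismatch between these dimensions and the bus/signal widths in the model is a common source of errors that appear only when the agent count changes.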
Hello, thanks for your reply. I managed to solve the issue, but it turned out it wasn't because of incorrect observation and action dimensions.
I have been having strange issues with my multi-agent RL models. In the case of 2 agents, when I provided slightly different observations to the agents, only one of them worked the way it was supposed to while the other produced strange results (both agents produced correct results when I provided the same observation). If I increased the number of agents to three or more, I got the error messages that I mentioned above. I rechecked all connections, observation and action dimensions, etc., but I couldn't see a problem anywhere. Interestingly, in both cases, when I created a new Simulink model with new blocks (with no change in operation), things started working correctly! I had encountered a similar problem with a different simulation model a few years ago, which is why I created a new model to see if it solved the issue. I think this may just be a matter of bugs. Do you think uninstalling and reinstalling MATLAB R2020b will help?
It could be that your model was saved in a bad state. If you face any issues with multi-agent training, please feel free to let us know.
I am working on a similar problem and would greatly appreciate some advice. My problem is very similar to "Train Reinforcement Learning Agent in Basic Grid World"; however, I wish to expand it to a multi-robot scenario with individual start and goal coordinates per agent. I have created a 26x26 grid-world environment with obstacles, in which a single agent moves from a given start point to an end point. I wish to add 1-10 agents to the grid world and train all of the robots to finish their paths without collision. Is this possible? I have read a lot of the multi-agent examples provided; however, they seem to differ from my specific problem, and I can't quite put my finger on how to solve it using those examples. Any guidance that could direct me towards a solution would be amazing.
Note: I have also coded an A* a-priori-knowledge function to give each robot a better, more focused solution before training; however, I am not sure of the best way to incorporate this into the problem.
Thank you for your time and consideration!
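For reference, the single-agent starting point from that example looks roughly like this (the terminal and obstacle cells below are placeholders; a multi-robot version would still require a custom environment whose state tracks every robot's position):

```matlab
% Sketch: 26x26 single-agent grid world with obstacles
GW = createGridWorld(26, 26);
GW.TerminalStates = "[26,26]";               % placeholder goal cell
GW.ObstacleStates = ["[3,5]"; "[10,12]"];    % placeholder obstacle cells
updateStateTranstionForObstacles(GW);        % block transitions into obstacles
env = rlMDPEnv(GW);
```

The built-in `rlGridWorld`/`rlMDPEnv` machinery only models one agent, so the multi-robot collision constraint would have to be expressed in a custom environment's reward and transition logic.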
@Ari Biswas Hi sir, I have created a custom environment for multi-agent training using rlCreateEnvTemplate. However, while validating the environment using validateEnvironment, I am getting the following error:
"Error using rl.env.MATLABEnvironment/validateEnvironment (line 42)
There was an error evaluating the step function.
Caused by:
Index exceeds the number of array elements (1)."
I figured out that it was because my step function expects the actions to be an array, and I built everything else on that assumption. I did that because I thought that, after creating the agents, when I call the step function to update the environment, I would pass in an array of the individual agents' actions.
Could you please share your views on how to validate a custom environment such as this one?
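One possible workaround, assuming a single template environment must pass `validateEnvironment`, is to expose one action channel whose rows hold each agent's action and split it inside `step`. A sketch under that assumption (all names and sizes are placeholders, not from the thread):

```matlab
% In the template class constructor (numAgents is a placeholder):
this.ActionInfo = rlNumericSpec([numAgents 1]);   % one row per agent

% In the step method, slice the stacked action vector per agent:
function [nextObs, reward, isDone, info] = step(this, action)
    for k = 1:size(action, 1)
        ak = action(k);            % action of agent k
        % ... apply ak to agent k's state ...
    end
    % ... assemble nextObs, reward, isDone from all agents ...
end
```

Because `validateEnvironment` samples a single action from `ActionInfo`, defining one stacked channel avoids the "Index exceeds the number of array elements" failure seen when `step` indexes past a scalar action.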
Hello, I've got the same issue with multiple agents. Did you find any solution for that?
Thank you very much; I'd appreciate your reply.
Best regards
Not yet... Maybe we can expect it in the R2023b release.


Release: R2019b