Using RL, how can I train multiple agents so that each agent navigates from its initial position to its goal position while avoiding collisions?

Views: 2 (last 30 days)
Let's assume there is a set of agents spread across 3D Cartesian space. A trajectory should be generated for each agent such that, if the agent follows its trajectory while heading to its goal waypoint, no collision occurs with the other agents. Any guidance on solving such a task would be highly appreciated.

Answers (1)

Emmanouil Tzorakoleftherakis on 5 Mar 2021
Edited: Emmanouil Tzorakoleftherakis on 5 Mar 2021
It's possible that the scenario you described can be solved by training a single agent and then "deploying" that trained policy to all UAVs/UUVs in your fleet. That would make the problem easier and less expensive to train. For a 2D example, take a look at this.
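As a rough illustration of that idea, the same trained policy can be queried once per agent with that agent's own local observation. This is only a minimal sketch: agent is assumed to be an already-trained Reinforcement Learning Toolbox agent, and buildLocalObservation/applyControl are hypothetical helpers, not toolbox functions.

% Sketch: reuse one trained policy for every agent in the fleet.
numAgents = 5;
for k = 1:numAgents
    % Hypothetical helper that assembles agent k's observation
    % (own state, vector to its goal, distances/bearings to neighbors).
    obs = buildLocalObservation(k);
    % Query the shared trained policy with agent k's local observation.
    act = getAction(agent, {obs});
    % Hypothetical helper that sends the resulting command to agent k.
    applyControl(k, act{1});
end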
3 Comments
Emmanouil Tzorakoleftherakis on 6 Mar 2021
I think it's a matter of what inputs you provide to the policy and the coordinate system you use (although I was thinking of the scenario where each agent has its own sensors). If you only use odometry data from all agents, I guess you could transform it into the distance to each nearby agent (probably including heading/bearing) and feed all of this info into the policy.
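For instance, here is a minimal sketch of turning raw 2D positions into such an observation vector. The variable names egoPos, goalPos, and otherPos are illustrative assumptions, not from any toolbox.

% Sketch: build one agent's observation from the odometry of all agents (2D).
egoPos   = [0; 0];               % ego agent position (assumed known from odometry)
goalPos  = [5; 3];               % ego agent's goal waypoint
otherPos = [1 4; 2 -1];          % other agents' positions, one column per agent

rel      = otherPos - egoPos;            % relative positions of the other agents
dists    = vecnorm(rel);                 % distance to each other agent
bearings = atan2(rel(2,:), rel(1,:));    % bearing to each other agent

% Observation fed to the policy: vector to the goal plus
% distance/bearing to every other agent.
obs = [goalPos - egoPos; dists(:); bearings(:)];

The length of obs would then set the observation specification (e.g., rlNumericSpec([6 1]) for two neighbors) used when creating and training the single shared agent.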
Steve Jessy on 6 Mar 2021
The coordinate system in which the agents are acting is a 2D Cartesian coordinate system. Yes, I can access the distance from an agent to all the other agents in the space. I'd like to kindly ask if you can provide an example/code in which the multi-agent system is trained based on odometry data.
