How can I optimize GPU usage while training multiple RL PPO Agents using multiple GPUs?


I wish to train multiple PPO agents asynchronously and using multiple GPUs. What is the best way to optimize GPU and CPU resources to achieve this?

Accepted Answer

MathWorks Support Team, 6 March 2024
If the network is small, the most effective approach is to train on the CPU in a parallel pool with an appropriate number of workers, rather than on a GPU. PPO tends to need large amounts of training data, and a small network rarely gains enough speedup from GPU execution to offset the overhead of moving data to and from the device.
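A minimal sketch of this CPU-only approach, assuming a PPO agent "agent" and environment "env" already exist in the workspace (the names and worker count are illustrative):

```matlab
% CPU-only parallel training sketch for a small network.
% Assumes "agent" (an rlPPOAgent) and "env" already exist.
pool = parpool(8);           % choose a worker count suited to your CPU cores

opts = rlTrainingOptions( ...
    UseParallel = true, ...  % run simulations on the pool workers
    MaxEpisodes = 5000);

trainingStats = train(agent, env, opts);
delete(pool);                % release the workers when done
```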
If training on GPUs, please ensure that you restrict the parallel pool worker count to the number of GPUs available. This way, each worker can access a unique GPU and perform training. For more information, please refer to the MathWorks documentation on training using multiple GPUs.
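The one-worker-per-GPU pairing can be sketched as follows (note that "spmdIndex" requires R2022b or later; on earlier releases use "labindex" instead):

```matlab
% Open a pool with exactly one worker per available GPU,
% then pin each worker to a distinct device.
numGPUs = gpuDeviceCount("available");
parpool(numGPUs);

spmd
    gpuDevice(spmdIndex);   % worker k selects GPU k
end
```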
Keeping that documentation in mind, please note the following:
  1. In your "rlTrainingOptions" object, if "UseParallel" is set to "true" and the actor and critic are configured to use the GPU, MATLAB automatically uses multiple GPUs for training. In this case, calling "train" inside a "parfor" or "spmd" block is not supported.
  2. If "UseParallel" is set to "false" and the actor and critic are configured to use the GPU, you may call "train" inside a "parfor" loop.
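A sketch of case 2 above, training several agents concurrently by calling "train" inside "parfor". The cell arrays "agents" and "envs" are assumed to exist and are illustrative, with each agent's actor and critic created with "UseDevice" set to "gpu":

```matlab
% Case 2 sketch: UseParallel = false, networks on GPU, "train" in parfor.
% Assumes cell arrays "agents" and "envs" of matching PPO agents and
% environments; each actor/critic was created with UseDevice = "gpu".
numAgents = numel(agents);
parpool(min(numAgents, gpuDeviceCount));   % at most one worker per GPU

opts = rlTrainingOptions(UseParallel = false);

stats = cell(1, numAgents);
parfor k = 1:numAgents
    gpuDevice(mod(k-1, gpuDeviceCount) + 1);  % spread agents across GPUs
    stats{k} = train(agents{k}, envs{k}, opts);
end
```

Each "parfor" iteration trains one agent on its own worker, which is what makes the overall training asynchronous across agents.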


Release

R2022b
