Direct GPU-to-GPU Communication with Parallel Computing Toolbox / SPMD
I am using spmd to enable parallel computing with multiple GPUs on one workstation. Basically, the GPUs do some calculation, broadcast their results, update their parameters, and iterate. The problem is that using labSend (actually, gplus in my case) to aggregate and broadcast the results is quite slow: it first pulls the results off the GPU into system memory, sends them to the other workers, and then uploads them to the other GPUs.
I understand that CUDA now has peer-to-peer memory access capability, which lets multiple GPUs access each other's memory directly: http://www.nvidia.com/docs/IO/116711/sc11-multi-gpu.pdf. This is accomplished with a function like cudaMemcpyPeerAsync().
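For reference, here is a minimal standalone sketch of what that looks like in plain CUDA C++ (a single process driving two GPUs; the device IDs and the buffer size are just illustrative):

// Minimal sketch of a CUDA peer-to-peer copy between two GPUs in one process.
// Device IDs (0 and 1) and the 1 MiB payload are illustrative only.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const size_t bytes = 1 << 20;
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0 /*device*/, 1 /*peerDevice*/);
    if (!canAccess) { std::printf("P2P not supported between GPUs 0 and 1\n"); return 1; }

    void *src = NULL, *dst = NULL;
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 map GPU 1's memory
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);   // and vice versa
    cudaMalloc(&dst, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    // Direct GPU-to-GPU transfer: no staging through host memory.
    cudaMemcpyPeerAsync(dst, 1 /*dstDevice*/, src, 0 /*srcDevice*/, bytes, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}

This is exactly the kind of transfer I would like gplus()/labSend() to perform under the hood.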
Thus, I would like to have a gplus() or labSend() that copies a gpuArray directly to the memory of another GPU on another worker.
Is this possible today? If not, is it something you are working on?
Thanks, Jon
Answers (1)
Edric Ellis
27 April 2015
Edited: Edric Ellis on 27 April 2015
Unfortunately, as you observe, Parallel Computing Toolbox currently has no means of achieving this. I believe peer-to-peer memory copying does work across multiple processes within a single node, which means you could use the GPU MEX interface to copy the data yourself.
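To make that concrete, here is a very rough sketch of the sending half of such an approach, combining the GPU MEX interface with CUDA IPC handles. This is not an official PCT API: the file name ipc_export.cu, the use of cudaIpcGetMemHandle, and the assumption that the gpuArray's buffer is a plain cudaMalloc allocation that stays alive while the peer reads it are all assumptions.

/* ipc_export.cu -- hypothetical sketch, not an official PCT API.
 * Exports a CUDA IPC handle for the device memory backing a gpuArray so that
 * another MATLAB worker process on the same node can open it with
 * cudaIpcOpenMemHandle and copy device-to-device. */
#include <string.h>
#include <cuda_runtime.h>
#include "mex.h"
#include "gpu/mxGPUArray.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    mxInitGPU();  /* required before using the mxGPUArray API */

    /* Wrap the incoming gpuArray and grab its raw device pointer. */
    const mxGPUArray *A = mxGPUCreateFromMxArray(prhs[0]);
    const void *devPtr  = mxGPUGetDataReadOnly(A);

    /* Ask the driver for an IPC handle that another process can open. */
    cudaIpcMemHandle_t handle;
    cudaError_t err = cudaIpcGetMemHandle(&handle, (void *)devPtr);
    if (err != cudaSuccess) {
        mxGPUDestroyGPUArray(A);
        mexErrMsgIdAndTxt("ipc_export:cuda", "%s", cudaGetErrorString(err));
    }

    /* Return the opaque handle bytes so MATLAB can labSend them. */
    plhs[0] = mxCreateNumericMatrix(1, sizeof(handle), mxUINT8_CLASS, mxREAL);
    memcpy(mxGetData(plhs[0]), &handle, sizeof(handle));

    mxGPUDestroyGPUArray(A);
}

The receiving worker would then need a companion MEX file that opens the handle with cudaIpcOpenMemHandle, copies into its own gpuArray with a device-to-device cudaMemcpy, and closes it with cudaIpcCloseMemHandle, while the sender keeps its gpuArray alive until the copy has completed.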