Use multiple GPUs for functions

Mantas Vaitonis on 2 October 2018
Edited: Matt J on 3 October 2018
Dear All, I am re-asking my previous question because it was unclear; I have now tried to simplify it. At my disposal there are two GPU devices (a GeForce GTX 1070 Ti and a GeForce GTX 1060 6GB), and I would like to parallelize my calculations across both of them. Let's say I have a 3-D gpuArray and I would like to pass this data in chunks to both GPUs (the function in my actual code is more complicated). This is an example of what I am trying to achieve, and yes, it does not work:
clear;
delete(gcp('nocreate'));
nGPUs = gpuDeviceCount();
parpool('local', nGPUs);
d1 = rand(10,10,10);
d = gpuArray(d1);
parfor i = 1:nGPUs
    c1 = zeros(10,10,10);
    c = gpuArray(c1);
    for j = 1:10
        c(:,:,j) = d(:,:,j)*2;
    end
end
der = c;
It gives a temporary variable error.

Accepted Answer

Matt J on 2 October 2018
Edited: Matt J on 2 October 2018
Is the question then why you get the temporary variable error? The reason is that the variable c is created inside the parfor loop. It is therefore a temporary variable, meaning that it has no life after the parfor loop. It is both forbidden and illogical to use a temporary variable after the parfor loop, as you have done at the line
der=c;
This is because the parfor loop maintains several parallel versions of c: every parallel worker has its own version, which might end up carrying a different value at the end of the loop, depending on the parallel operations done to it. So which of these versions would be assigned to der?
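For reference, a minimal sketch of one way to keep the result, assuming the aim is simply to retain each worker's output: assign into a sliced cell array, which does survive the parfor loop (variable names follow the question's code):

out = cell(1, nGPUs);                % sliced output: out{i} survives the loop
parfor i = 1:nGPUs
    gpuDevice(i);                    % select GPU i on this worker
    d = gpuArray(d1);                % copy the host data onto this worker's GPU
    c = zeros(10,10,10, 'gpuArray'); % preallocate the result on the same GPU
    for j = 1:10
        c(:,:,j) = d(:,:,j)*2;
    end
    out{i} = gather(c);              % sliced assignment is kept after the loop
end
der = out{1};                        % here every worker computed the same thing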
7 Comments
Mantas Vaitonis on 3 October 2018
Yes, I do understand that these calculations are done in parallel and that the same for j=1:10 loop is processed on both GPUs. But what would be the way if my variable were d1=rand(1e8,1e8,1e8) and the loop were for j=1:1e8, and I wanted to divide this loop between the two GPUs, so that one runs j=1:5e7 and the other GPU runs j=5e7+1:1e8? Or is this not suitable for GPUs? I am able to pass all the data to one GPU, but if I pass it to two GPUs it should result in a processing speedup.
Matt J on 3 October 2018
Edited: Matt J on 3 October 2018
One way,
d1Cell = {d1(:,:,1:5e7), d1(:,:,5e7+1:end)}; % split the data along dim 3
c1 = cell(1, nGPUs);                         % holds each worker's gathered result
parfor i = 1:nGPUs
    gpuDevice(i);                            % select GPU i on this worker
    d = gpuArray(d1Cell{i});                 % move this chunk onto GPU i
    c = zeros(size(d), 'gpuArray');          % preallocate the result on the GPU
    for j = 1:size(d,3)
        c(:,:,j) = d(:,:,j) + i - j;         % a fake i-dependent operation
    end
    c1{i} = gather(c);                       % bring the result back to the host
end
