Can this GPU code snippet be redone without nested loops?
Hello, I have two matrices: matrix1 is a logical array of 1s and 0s (1000 x 800); matrix2 is a different logical array (2000 x 800).
I am essentially taking each row of matrix1 and calculating the row-wise sum of common elements divided by the total number of elements in the union. Both of these arrays are gpuArrays. What I found is:
for j = gpuArray.colon(1,x)
    for k = gpuArray.colon(1,y)
        output(j,k) = sum(matrix1(j,:) & matrix2(k,:)) / sum(matrix1(j,:) | matrix2(k,:));
    end
end
This runs very fast for small values of x and y, but once x and y are large it takes dramatically longer to run on the GPU.
I am investigating the use of repmat here but I am not sure how to implement it. Any ideas? Or is there another option to get rid of the nested for loops?
Thanks
Accepted Answer
More Answers (1)
Sean de Wolski
11 Nov 2013
Edited: Sean de Wolski
11 Nov 2013
Is output preallocated?
Before the loops:
output = gpuArray.zeros(x,y);
This should speed it up dramatically.
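Beyond preallocation, the ratio computed inside the nested loops is a Jaccard index between a row of matrix1 and a row of matrix2, and for logical inputs it can be computed with no loops at all via matrix multiplication, since sum(a & b) equals the dot product of the rows and the union count is sum(a) + sum(b) - sum(a & b). A sketch, assuming matrix1 is x-by-n and matrix2 is y-by-n as in the question (bsxfun is used for the expansion, matching MATLAB releases of this era):

```matlab
% Vectorized Jaccard similarity between every row of matrix1 and every
% row of matrix2. Works unchanged for gpuArray inputs; the matrix
% multiply then runs on the GPU.
A = double(matrix1);   % x-by-n
B = double(matrix2);   % y-by-n

inter  = A * B.';      % x-by-y: inter(j,k) = sum(matrix1(j,:) & matrix2(k,:))
uni    = bsxfun(@plus, sum(A,2), sum(B,2).') - inter;  % |a|+|b|-|a&b|
output = inter ./ uni; % x-by-y, same values as the nested loops produce
```

This replaces x*y kernel launches with a single large matrix multiply, which is the kind of operation GPUs are fastest at.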
3 Comments
Amr Ragab
11 Nov 2013
Sean de Wolski
11 Nov 2013
Edited: Sean de Wolski
11 Nov 2013
Do matrix1 and matrix2 already live on the gpu, i.e. are they gpuArrays?
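One quick way to check, and to transfer the data once up front if needed (a sketch):

```matlab
% True if the data already lives on the GPU.
isa(matrix1, 'gpuArray')
isa(matrix2, 'gpuArray')

% If not, move each matrix over once, before any loops, so the
% transfer cost is paid a single time rather than per iteration.
matrix1 = gpuArray(matrix1);
matrix2 = gpuArray(matrix2);
```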
Amr Ragab
11 Nov 2013