How to distribute computation on GPU vector-wise?
Hi,
I am trying to accelerate a specific function by assigning each row of a matrix to one GPU core and having that core process the row and return a new matrix. Let's say my input matrix is n by m; I want the computation to be distributed across n cores, with each of the n cores returning a matrix of size k by m. The computation applied to each row is quite complicated, but it only requires functions supported on the GPU.
As I understand it, arrayfun can only be used for single-element operations, not arrays. The individual elements in one row of the input matrix, however, cannot be computed independently. I think pagefun and bsxfun also won't work, because they do not support user-written functions. Is there any way to do this in MATLAB without having to implement the entire code in CUDA?
Thanks!
Answers (2)
Joss Knight
20 Apr 2017
You can loop over and read multiple entries in an input array (as an up-value variable) inside arrayfun, but you can't loop over and assign to elements of an output array. There is no general way to do this in MATLAB code.
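To illustrate, here is a minimal sketch of that pattern: a per-row kernel that reads a whole row of the input through an up-value variable and returns a single scalar per row. The names (`rowSum`, `A`, `rows`) are hypothetical, and the code is assumed to live inside a function file so the nested function can capture `A` and `m`; note the output is one scalar per thread, not a k-by-m block.

```matlab
function S = rowSumsOnGPU()
% Sketch: each arrayfun "thread" gets one row index i, loops over and
% READS row i of the up-value gpuArray A, and returns one scalar.
% Assigning into an output array element-by-element is not possible.
A = gpuArray.rand(4, 8);              % example n-by-m input
m = size(A, 2);

    function s = rowSum(i)
        s = 0;
        for j = 1:m                   % loop over and read row i of A
            s = s + A(i, j);
        end
    end

rows = gpuArray.colon(1, size(A, 1))';  % one element (thread) per row
S = arrayfun(@rowSum, rows);            % n-by-1 result, one scalar per row
end
```

Because each thread can only return scalars, producing a k-by-m block per row this way would require k*m separate scalar outputs, which is why vectorized functions or pagefun are usually the better route.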
Your best bet is to tell us what you're trying to do, and we can show how a combination of vectorized MATLAB functions and possibly pagefun can give you what you want without you having to write custom CUDA.
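For reference, pagefun applies a supported (built-in) function to each page of a gpuArray, which often covers the "same operation per slice" case. A minimal sketch, with made-up sizes:

```matlab
% Sketch: apply a matrix multiply to every page of a 3-D gpuArray.
% M and B are example data; pagefun runs @mtimes on each k-by-m page.
M = gpuArray.rand(3, 3);          % fixed k-by-k matrix
B = gpuArray.rand(3, 5, 10);      % 10 pages, each 3-by-5
C = pagefun(@mtimes, M, B);       % C is 3-by-5-by-10
```

If the per-row computation can be phrased as such batched built-in operations, no custom CUDA is needed.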