Make double loop run on GPU
Hi
I have the following code that I want to run somehow on the GPU.
In short, I have a 2D array (an MRI image). Starting from a user-defined centre point, the image is divided into concentric radial "rings" of specified width (e.g. 0.1 mm), and the total number of pixels and the total signal intensity (pixel value) in each ring are recorded in separate output arrays (called Profile and SumBinSignal). The code below does this using two for loops.
for X = Xmin:1:Xmax
    for Y = Ymin:1:Ymax
        if ~isnan(Image(Y,X)) % NaN pixels are just ignored
            Dist = Resolution * ( (X-CentX)^2 + (Y-CentY)^2 )^0.5; % distance from point to centre, in mm
            TargetBin = ceil(Dist * BinDensity); % convert distance from mm to "bins/rings"
            if (TargetBin >= 1) && (TargetBin <= MaxTargetBin)
                Profile(TargetBin) = Profile(TargetBin) + 1; % count pixels in this bin/ring
                SumBinSignal(TargetBin) = SumBinSignal(TargetBin) + Image(Y,X); % total signal in this bin/ring
            end
        end
    end
end
Now I know that in order to run on the GPU, I must either vectorize the calculations or use "arrayfun", but I really can't manage to do either. I managed to vectorize the calculations only partly, with no great speed advantage. As for the arrayfun approach, I don't even know how to start.
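For context, here is a sketch (my own, not from the original post) of what a full vectorization of the double loop could look like, replacing the per-pixel accumulation with meshgrid plus accumarray. The variable names (Image, Resolution, CentX, CentY, BinDensity, MaxTargetBin, Xmin/Xmax/Ymin/Ymax) are taken from the loop version; the tiny 4x4 test image and its parameter values are illustrative stand-ins, not real data.

```matlab
% Illustrative inputs only -- a toy 4x4 image standing in for the MRI slice
Image = ones(4);
Resolution = 1;  BinDensity = 1;  MaxTargetBin = 5;
CentX = 1;  CentY = 1;
Xmin = 1;  Xmax = 4;  Ymin = 1;  Ymax = 4;

[X, Y] = meshgrid(Xmin:Xmax, Ymin:Ymax);          % coordinates of every pixel
Sub  = Image(Ymin:Ymax, Xmin:Xmax);               % region of interest
Dist = Resolution * hypot(X - CentX, Y - CentY);  % distance to centre, in mm
TargetBin = ceil(Dist * BinDensity);              % ring index of every pixel
valid = ~isnan(Sub) & TargetBin >= 1 & TargetBin <= MaxTargetBin;

% accumarray sums the contributions of all pixels sharing a ring index,
% which reproduces the two running totals of the double loop in one call each
Profile      = accumarray(TargetBin(valid), 1,          [MaxTargetBin 1]);
SumBinSignal = accumarray(TargetBin(valid), Sub(valid), [MaxTargetBin 1]);
```

Since meshgrid, hypot, logical indexing, and accumarray are all supported for gpuArray inputs, the same code should in principle run on the GPU after `Image = gpuArray(Image);`, though I have not benchmarked whether it is actually faster.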
Help will be highly appreciated
best regards, Rozh
Accepted Answer
More Answers (0)