Matrix multiplication optimization using GPU parallel computation

47 views (last 30 days)
Dear all,
I have two questions.
(1) How do I monitor GPU core usage when I am running a simulation? Is there any visual tool to dynamically check GPU core usage?
(2) Mathematically the new and old approaches are the same, so why is the new approach 5-10 times faster?
%%% Code for new approach %%%
M = gpuArray(M);                                   % move M to the GPU once, before the loop
for nt = 1:STEPs
    if (there is a periodic boundary condition)    % placeholder condition from the post
        M = A1*M + A2*f*M;
        % diffusion
        M = A1*M;
    end
end
6 Comments
Nick on 20 Aug 2022
Hi Jan,
The following table summarizes the computation-time comparison across the different approaches, with the GPU enabled and disabled.
The new one-step approach 1 doesn't show any improvement.


Accepted Answer

Matt J on 18 Aug 2022
Edited: Matt J on 18 Aug 2022
Because in your second formulation, there is no need to build a table of non-zero entries for the sparse matrix B. The table-building step requires sorting operations, which your second version avoids.
Also, if B has many columns, it will consume a lot of memory in proportion to the number of columns (independent of the sparsity). That is avoided as well by the second implementation.
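To make the difference concrete, here is a minimal sketch (with hypothetical sizes and operators, not Nick's actual A1, A2, f) contrasting the two patterns: assembling a sparse operator from a triplet table each step versus applying the update directly with gpuArray arithmetic.
%%% Sketch: sparse-table construction vs. direct gpuArray arithmetic %%%
n  = 1e6;                       % hypothetical problem size
ii = randi(n, 5*n, 1);          % row indices of the non-zero entries
jj = randi(n, 5*n, 1);          % column indices
vv = rand(5*n, 1);              % values
x  = rand(n, 1);

% "Old" pattern: build the sparse operator B from a triplet table.
% sparse() has to sort and accumulate the triplets into compressed-column form.
tic
B  = sparse(ii, jj, vv, n, n);
y1 = B * x;
toc

% "New" pattern: keep the data on the GPU and apply the update with plain
% arithmetic, so no triplet table is ever built or sorted.
A1 = gpuArray(rand(n, 1));      % hypothetical diagonal operator stored as a vector
f  = gpuArray(rand(n, 1));
xg = gpuArray(x);
tic
y2 = A1 .* xg + A1 .* (f .* xg);
wait(gpuDevice);                % make sure the GPU work has finished before toc
toc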
10 Comments
Nick on 23 Jan 2023 at 0:49
Thank you!


More Answers (1)

Joss Knight on 19 Aug 2022
The Windows Task Manager lets you track GPU utilization and memory graphically, and the utility nvidia-smi lets you do it in a terminal window.
Neither the CUDA driver nor the runtime provides access to which core is running what, although you might be able to hand-code something using NVML.
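For example, one way (a sketch, assuming nvidia-smi is on the system path) to poll utilization and memory from inside MATLAB while a simulation is running:
%%% Sketch: polling GPU utilization from MATLAB via nvidia-smi %%%
[status, out] = system('nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader');
if status == 0
    disp(out)                   % e.g. "87 %, 2048 MiB"
end

gpuDevice                       % MATLAB's own view of the selected device (memory, etc.)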
3 Comments
Nick on 29 Aug 2022
Hi Joss, thanks for your info!


