I have 4 cores + a CUDA-supported graphics card. Is this equivalent to 5 cores?

Hello
I want to make the most of my computer's resources for parallel computing, probably using spmd. I have 4 cores and a CUDA-supported graphics card, which I can use through gpuArray. Does this mean that I can use 5 cores, or does the GPU also require one of the CPU cores from the start?
If this is equivalent to 5 cores, how can I use them?
Thank you

Accepted Answer

Matt J on 1 Jan 2013
Edited: Matt J on 2 Jan 2013
No, the kinds of computations that a GPU can do are different from those a CPU can do, and therefore it cannot function as an additional CPU core. The GPU actually contains many hundreds of cores of its own, but these cores are specialized and capable of only very simple operations. You can only use the graphics card in conjunction with gpuArray.
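For illustration, a minimal sketch of driving the card through gpuArray might look like this (it assumes Parallel Computing Toolbox and a CUDA-capable device; the sizes and variable names are just examples):
  % Move data to the GPU, do element-wise work there, bring the result back.
  A = rand(4000);          % ordinary array in host (CPU) memory
  G = gpuArray(A);         % copy to GPU memory
  H = G.^2 + sin(G);       % element-wise work runs across the GPU's many simple cores
  result = gather(H);      % copy the result back to host memory
The point is that the GPU only ever runs these restricted, data-parallel operations; it does not show up as a fifth worker that you could hand arbitrary MATLAB code to.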
3 Comments
Walter Roberson on 3 Jan 2013
As far as I understand, if you were to start a gpu calculation, and then were to start spmd, then the gpu and the spmd could potentially run in parallel, with you gathering the gpu results after the spmd session finished. But if you are not using a .cu to supply a kernel that can run for a fair while by itself, then the gpu would run out of things to do, as there is no "master session" running alongside the spmd sessions and keeping the gpu fed.
If I recall, it is possible for the individual spmd labs to connect with the gpu, at least in the more recent versions. I do not recall the restrictions now; what I recall is that it used to be described as requiring one gpu per spmd lab, but that now there is a way to share.
What I have no idea about is whether, if you start gpuArray() going and then start spmd sessions, the task of managing the gpu would get any cpu time. I do not know whether you have a Tesla-based graphics card, and without Tesla the GPU remains in a mode of being limited to 30ms kernels (because the graphics subsystem needs to use the card too).
It would not surprise me in the least if I got some of the details wrong in this; I do not have the toolbox to play with, so I've just been following along as people say interesting things. But perhaps something in what I wrote might prompt you to ask your question a different way.
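For what it's worth, the per-lab arrangement might look roughly like the sketch below. This is only a sketch under the assumption that each lab can open (or share) the GPU, which as noted depends on the release and the card; matlabpool has since been replaced by parpool.
  matlabpool open 4              % one lab per CPU core (parpool(4) in newer releases)
  spmd
      g = gpuDevice;             % each lab selects the GPU it will use
      x = gpuArray.rand(1e6, 1); % GPU work issued independently by each lab
      partial = gather(sum(x.^2));
  end
  matlabpool close
After the block, partial is a Composite, so partial{1} through partial{4} hold the individual labs' results.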
Matt J on 3 Jan 2013
"As far as I understand, if you were to start a gpu calculation, and then were to start spmd, then the gpu and the spmd could potentially run in parallel, with you gathering the gpu results after the spmd session finished. But if you are not using a .cu to supply a kernel that can run for a fair while by itself, then the gpu would run out of things to do, as there is no 'master session' running alongside the spmd sessions and keeping the gpu fed."
It seems very strange to me that you can do that. Wouldn't you need some kind of M-code version of __syncthreads() that you could call from your M-file to make sure that both the spmd and gpu operations have finished before proceeding?
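Or would gather itself serve as the synchronization point? As far as I can tell it blocks until the GPU has finished producing the requested result, so the pattern might look roughly like this (untested sketch, example variable names):
  G = gpuArray(rand(4000));
  H = G * G';                     % GPU work is queued (may run asynchronously)
  spmd
      localSum = sum(rand(1e6,1));    % CPU-side work on each lab
  end
  gpuResult = gather(H);          % blocks here until the GPU result is ready
  cpuTotal  = sum([localSum{:}]); % combine the labs' results from the Composite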


More Answers (0)
