How can I utilize the GPU while classifiers are running in the Classification Learner app?

17 views (last 30 days)
I'm working with deep neural networks, which need a lot of compute power. I used a Tesla K40c and a GeForce GTX 1050 Ti (via Parallel Computing Toolbox) for feature extraction from different pretrained models, but at the classification stage (done in the Classification Learner app) neither GPU is utilized. I have configured MATLAB R2018a with CUDA Toolkit 9.2 and the cuDNN 9.2 library. I also tried other combinations of MATLAB, CUDA Toolkit, and cuDNN versions, such as MATLAB R2017a with CUDA Toolkit 8.0 and cuDNN 8.0, to name a few.
My GPU is utilized while I use the MATLAB function activations to extract features, but GPU utilization drops to zero while the Classification Learner app trains the classifiers.
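For reference, this is roughly what my feature-extraction step looks like (a minimal sketch; the pretrained network, layer name, and image folder here are placeholders, not my exact code):

% Feature extraction on the GPU via activations.
% 'alexnet', 'fc7', and 'myImages' are placeholder choices.
net   = alexnet;                                   % any pretrained model
imds  = imageDatastore('myImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
aimds = augmentedImageDatastore(net.Layers(1).InputSize(1:2), imds);
% This call runs on the GPU because of 'ExecutionEnvironment','gpu':
features = activations(net, aimds, 'fc7', ...
    'ExecutionEnvironment', 'gpu', 'OutputAs', 'rows');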
So, I need to utilize my GPU while using the Classification Learner app to minimize the execution time during testing.
I have installed all the required toolboxes, such as Neural Network Toolbox, Parallel Computing Toolbox, and the pretrained model support packages.
I need help with this and am waiting for your response.
Thanks!
7 Comments
Junaid Lodhi on 16 Sep 2018
gpuArray is not an efficient approach when you have to run a lot of simulations with different classifiers, whereas the Classification Learner app provides a very friendly environment for testing many good classifiers with a range of hyperparameter variations.
Joss Knight on 20 Sep 2018
Installing the CUDA toolkit, cuDNN, Visual Studio, and MatConvNet has nothing whatsoever to do with MATLAB or Classification Learner. To use the GPU in MATLAB you create gpuArray objects and pass them to supported functions. If you write your own MEX functions, the toolkit and cuDNN may become relevant, and if you install MatConvNet you have access to the supported tools within that third-party toolbox. But of course none of that is integrated with Classification Learner.
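To illustrate the gpuArray workflow, a minimal round trip looks like this (a sketch, not tied to any particular release):

% Move data to the GPU, compute there, then gather the result back.
A = gpuArray(rand(4096, 'single'));   % copy data into GPU memory
B = A * A.';                          % executes on the GPU
C = gather(B);                        % copy the result back to the host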


Accepted Answer

Bernhard Suhm on 24 Sep 2018
This page lists all the functions that support gpuArray; so far that is just a handful of statistical and "classic" machine learning ones. But not all "classic" machine learning algorithms lend themselves to parallelization on a GPU.
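For example, a gpuArray-enabled statistics function can be used like this (a sketch; pdist2 is one function documented as gpuArray-enabled, but check the support list for your release):

% Pairwise distances computed on the GPU via a gpuArray-enabled function.
X = gpuArray(rand(10000, 128, 'single'));
Y = gpuArray(rand(500, 128, 'single'));
D = pdist2(X, Y);    % runs on the GPU because the inputs are gpuArrays
D = gather(D);       % bring the result back to CPU memory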
3 Comments
Bernhard Suhm on 28 Sep 2018
Our GPU Coder enables you to run optimized deep learning algorithms on GPUs, and according to our benchmarks it is significantly faster than Python-based deep learning frameworks. The benefit of GPUs for "classic" machine learning is less clear, which is why we haven't put the same effort into supporting classic machine learning on GPUs. That said, if your CPU is connected to a GPU and you have PCT, the "Use Parallel" button in Classification Learner will cause model training to fan out processing to the GPU, though without leveraging the GPU-specific CUDA or TensorRT acceleration libraries yet.
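At the command line, the closest analogue is passing a parallel option to a fitting function (a sketch; fitcecoc is one fitting function that accepts 'UseParallel', and the features/labels variables are placeholders for your extracted data):

% Parallel training from the command line (requires PCT).
% 'features' and 'labels' stand in for your own variables.
opts = statset('UseParallel', true);
mdl  = fitcecoc(features, labels, 'Options', opts);  % binary learners train in parallel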
qusay hamad on 14 Feb 2021
I agree with Mr. Junaid Lodhi.
MATLAB needs some improvement to make using the GPU clearer and easier, because right now only gpuArray is available, and that is not straightforward when writing a large program spread across multiple files. I hope this improves in the future.


More Answers (0)

