Community Profile


Joss Knight


Last seen: 1 day ago

MathWorks

370 total contributions since 2013

Although I cannot be contacted directly, if you would like to ask me a question all you have to do is mention "GPU" somewhere in your MATLAB Answers question.

Joss Knight's Badges

  • 36 Month Streak
  • Knowledgeable Level 4
  • Pro
  • Revival Level 2
  • First Answer


Content Feed

Answered
Arrayfun GPU in "Game of Life" works slower than CPU
Check out this Answer. The arrayfun version is rather dependent on good memory performance since the kernel is accessing global ...

6 days ago | 0

Answered
Deep Learning - Distributed GPU Memory
No, there is nothing like what you are after, to distribute the weights of a fully connected layer across multiple GPUs. You cou...

6 days ago | 0

Answered
Multiple GPUs perform slower than single GPU to train a semantic segmentation network
On Windows, due to GPU communication issues on that platform, it is difficult to get any benefit from multi-GPU training. This w...

25 days ago | 0

| Accepted

Answered
The utilization of GPU is low in deep learning
Try following some of the advice in the following MATLAB Answer: https://uk.mathworks.com/matlabcentral/answers/463367-gpu-utili...

about 1 month ago | 0

Answered
Assigning gpuArrays to different graphics cards
There is no way to do what you ask. Selecting a GPU is the only way to move data there, and selecting a GPU resets all GPU data....
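
A hedged sketch of the workaround this implies: gather data back to host memory before switching devices, since selecting another GPU resets the previous device's data. Variable names here are illustrative.

```matlab
% Data lives on GPU 1
gpuDevice(1);
A = gpuArray(rand(1000));

% Bring it back to the CPU first, because selecting
% another GPU resets all data on the current one
A_host = gather(A);

% Select GPU 2 and re-upload
gpuDevice(2);
A2 = gpuArray(A_host);
```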

about 1 month ago | 0

| Accepted

Answered
GPU out of memory
In your example code you are using the default mini-batch size of 128. Reduce the MiniBatchSize training option until you stop g...
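
A minimal sketch of that fix, assuming otherwise default training options (the data and layers variables are placeholders for your own):

```matlab
% Halve the mini-batch size from the default of 128 until
% the out-of-memory error stops occurring
opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 64, ...   % try 64, then 32, 16, ...
    'MaxEpochs', 10);
net = trainNetwork(trainingData, layers, opts);
```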

about 2 months ago | 0

Answered
incorrect memory copy when using property size validator
I was able to reproduce this in R2018b but not in R2019a or R2019b. It looks like property validators used to trigger a deep cop...

3 months ago | 0

| Accepted

Answered
Solution of large sparse matrix systems using GPU MLDIVIDE
The general advice is that Sparse MLDIVIDE may be convenient, but it is 'usually' slower than use of an iterative solver with an...

3 months ago | 0

| Accepted

Answered
Deep Learning: Training Network with "parallel" option using only CPUs
Even with a weak graphics card you will usually see better performance than on multiple CPUs. However, to try it out, after you ...

4 months ago | 0

| Accepted

Answered
How to use Levenberg-Marquardt backprop with GPU?
This isn't supported out of the box yet. You could convert your network to use dlarray and train it with a custom training loop....

4 months ago | 0

Answered
hardware requirements for MATLAB
Partial answer: GPU Computing. You can't have a MATLAB without CPU computing, so obviously both is better. No. Mostly using the ...

4 months ago | 1

Answered
Which Visual Studio 2019 package should I install to work with CUDA?
To accelerate your MATLAB code with an NVIDIA GPU, you do not need to install a C++ Compiler.

4 months ago | 0

Answered
Why would the file size of a deep learning gradient become much bigger after saving as a .mat file?
The difference is that whos is unable to account for the fact that the data is all stored on the GPU, and is only showing CPU me...

4 months ago | 0

| Accepted

Answered
Training a Variational Autoencoder (VAE) on sine waves
It looks like your input data size is wrong. Your formatting says that the 4th dimension is the batch dimension, but actually it...

5 months ago | 0

| Accepted

Answered
Is lhsdesign (latin hypercube sampling) supported by gpuArray?
It is not supported. You can tell whether or not a function supports gpuArray, more reliably than from the list of gpuArray meth...

5 months ago | 1

| Accepted

Answered
Does GeForce GTX1080 GPU work well for deep learning training?
Yes.

5 months ago | 1

| Accepted

Answered
Feed data into Neural Networks file-by-file
Datastores are designed for precisely this purpose. It may be that you're after an imageDatastore processed by a transform.
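
A hedged sketch of that pattern; the folder path, file extension, and preprocessing function are placeholders (imresize assumes Image Processing Toolbox):

```matlab
% One file per observation, read lazily during training
imds = imageDatastore('path/to/files', 'FileExtensions', '.png');

% Apply per-file preprocessing on the fly
tds = transform(imds, @(img) single(imresize(img, [224 224])) / 255);

% tds can then be passed directly to trainNetwork
```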

5 months ago | 0

Answered
Gather cell array from GPU to CPU
A_cpu = cellfun(@gather, A_gpu, 'UniformOutput', false);

5 months ago | 1

| Accepted

Answered
Error in matlab included deep learning example
There is a bug in this Example which will be rectified. Thanks for reporting. To work around it, initialize the loss variable in the...

6 months ago | 2

| Accepted

Answered
movsum slower than conv2 in GPU
One might theorize, perhaps, that movsum literally uses the same kernels as conv2, but first has to construct the filter of ones...

6 months ago | 2

Answered
Does MATLAB require dedicated graphic card
If you want hardware-rendered plots and 3-D visualizations, you need a GPU of some kind. Without it, these things will be a bit ...

7 months ago | 0

Answered
Deep learning with a GPU that supports fp16
You can take advantage of FP16 when generating code for prediction on a deep neural network. Follow the pattern of the Deep Lear...

7 months ago | 1

| Accepted

Answered
Select a GPU to be used by a function running in parallel(parfeval)
I'd have to know what kind of postprocessing you're doing - please post some code. On the face of it, the answer is simply to us...
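
On the assumption that the goal is to pin each worker to its own GPU before handing out parfeval work, a minimal sketch:

```matlab
% Open one worker per available GPU
pool = parpool(gpuDeviceCount);

% Pin worker k to GPU k, once, up front
spmd
    gpuDevice(labindex);
end

% Subsequent parfeval tasks run on whichever worker
% (and therefore whichever GPU) picks them up
f = parfeval(@(x) gather(sum(gpuArray(x))), 1, rand(1e6, 1));
result = fetchOutputs(f);
```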

8 months ago | 0

| Accepted

Answered
'radix_sort: failed to get memory buffer' when executing accumarray on gpuArrays of certain size
There is an issue in an NVIDIA library that is not functioning correctly when memory is limited. This is fixed in CUDA 10 / MATL...

8 months ago | 0

Answered
Why does gpuArray() error out?
Make sure you have read this: https://uk.mathworks.com/matlabcentral/answers/442324-can-i-use-matlab-with-an-nvidia-gpu-on-macos...

8 months ago | 1

Answered
GPU recommendation for Deep Learning and AI
The Tesla V100 is a passively cooled device only suitable for servers. Is that available to you? The Quadro card you indicate is...

8 months ago | 0

Answered
.CU Files for MATLAB
Hi Oli. You don't run nvcc in MATLAB, since it isn't a MATLAB feature. You run it at a Windows Command Prompt (or PowerShell). U...

8 months ago | 0

Answered
Error using nnet.internal.cnngpu.convolveBiasReluForward2D
If you want to go back to using your CPU, add the 'ExecutionEnvironment','cpu' name-value pair to your call to semanticseg. C = semanticseg(Img...
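
A hedged sketch of that call shape, with I and net standing in for your own image and network:

```matlab
% Force segmentation to run on the CPU rather than the GPU
C = semanticseg(I, net, 'ExecutionEnvironment', 'cpu');
```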

8 months ago | 0

Answered
Fast 2D distance calculation
pdist2 is the usual way to do this, if you have Statistics and Machine Learning Toolbox.
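
A minimal example of the pdist2 suggestion (requires Statistics and Machine Learning Toolbox); X and Y are illustrative point sets:

```matlab
X = rand(100, 2);   % 100 2-D points
Y = rand(50, 2);    % 50 2-D points

% D(i,j) is the Euclidean distance from X(i,:) to Y(j,:)
D = pdist2(X, Y);   % 100-by-50 distance matrix
```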

8 months ago | 0

Answered
Unexpected speed decrease of 2D Fourier Transform on GPU when iFFTed
I modified your code inserting wait(gpuDevice) before each tic and toc and got a much more sensible graph: The GPU runs async...
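
A sketch of that timing fix, with fft2 standing in for whatever GPU computation is being measured:

```matlab
A = gpuArray(rand(4096));

wait(gpuDevice);    % drain pending async work before starting the clock
tic
B = fft2(A);
wait(gpuDevice);    % make sure the FFT has actually finished
t = toc;
```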

10 months ago | 0

| Accepted
