How to speed up code using GPU?

1 view (last 30 days)
khan on 10 Apr 2015
Commented: Greg Heath on 20 Apr 2015
Hi all, I have a general question. I have a neural network whose input is 80x60x13x2000.
In the current setup I take one sample (80x60x13) at a time and process it through to the final output. In the first hidden layer it becomes 76x56x11x3, in the second 38x28x9x3, and in the third 34x24x7x3.
Can anybody tell me how to use the GPU at the first and third layers so that this becomes faster? I previously converted all the data to gpuArray, but performance got worse.
Can anybody guide me on how to utilize it better?
With Best Regards
khan
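
For reference, here is a minimal sketch of the usual way to make gpuArray pay off: transfer the data to the GPU once, process many samples per batch, and gather results back only once per batch, instead of moving each 80x60x13 sample back and forth. The kernel, batch size, and single-channel `convn` layer below are placeholders, not your actual network; the only point shown is amortizing the transfers.

X  = rand(80, 60, 13, 2000, 'single');               % placeholder data set on the host
Xg = gpuArray(X);                                     % one host-to-GPU transfer

kernel = ones(5, 5, 3, 'single', 'gpuArray') / 75;   % placeholder 5x5x3 filter

batchSize  = 200;
numSamples = size(Xg, 4);
numBatches = ceil(numSamples / batchSize);
out = cell(1, numBatches);

for k = 1:numBatches
    idx   = (k-1)*batchSize + 1 : min(k*batchSize, numSamples);
    batch = Xg(:, :, :, idx);                         % stays on the GPU

    % First-layer example: a 'valid' 3-D convolution, 80x60x13 -> 76x56x11
    % (ignores the 3 feature maps of the real network).
    a1 = zeros(76, 56, 11, numel(idx), 'single', 'gpuArray');
    for s = 1:numel(idx)
        a1(:, :, :, s) = convn(batch(:, :, :, s), kernel, 'valid');
    end

    out{k} = gather(a1);                              % one GPU-to-host transfer per batch
end

If each GPU call only touches one small 80x60x13 sample, the transfer and kernel-launch overhead usually outweighs the speedup, which would explain why converting everything to gpuArray made things worse.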
1 Comment
Greg Heath on 20 Apr 2015
Inputs, targets, and outputs are 2-dimensional matrices. I have no idea how your description relates to 2-D matrix signals and a hidden-layer net topology.
Typically,
[ I N ] = size(input)
[ O N ] = size(target)
[ O N ] = size(output)
The corresponding node topology is
I-H-O for a single hidden layer
I-H1-H2-O for a double hidden layer
Please try to explain your problem in these terms.
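
A minimal sketch of the conventional setup Greg describes, with flattened 2-D input/target matrices and the toolbox's built-in GPU option; flattening each 80x60x13 sample into a column and the single output target are assumptions for illustration only.

I = 80*60*13;                 % input dimension per flattened sample
N = 2000;                     % number of samples
x = rand(I, N);               % placeholder inputs,  I-by-N
t = rand(1, N);               % placeholder targets, O-by-N (here O = 1)

net = feedforwardnet(10);                  % I-H-O topology with H = 10 hidden nodes
net = train(net, x, t, 'useGPU', 'yes');   % let train() handle the GPU
y   = net(x);                              % outputs, O-by-N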

Answers (0)
