Convolutional neural networks: What is the best-practice training approach using graphics cards?

1 view (last 30 days)
Training a convolutional neural network (CNN) for image classification, I successfully used the trainNetwork function on 4 CPU cores. However, the process takes quite a long time (hours) and needs to be accelerated, e.g. by using a graphics card.
Currently, I pass a table (tbl) containing the image paths and labels to trainNetwork. I suppose the images are read from disk and then processed sequentially by the function. This may work to some extent on a CPU-based system. Using a GPU, however, I assume this approach will significantly slow down training due to the many individual transfers to the graphics card and the associated delays. Is it possible, for example, to transfer the training data to the graphics card batch-wise, or is this done automatically by the Parallel Computing Toolbox? How would I have to adapt my code in this case? It would be great to have a minimal code snippet.
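My current setup looks roughly like this (a minimal sketch; the variable names are placeholders, not my actual code):
>> tbl = table(imagePaths, responses); % column 1: image file paths, column 2: targets
>> net = trainNetwork(tbl, layers, options); % layers/options defined elsewhere; runs on 4 CPU cores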
Thank you! Best, Stephan
P.S.: I should mention that I cannot use an imageDatastore, since this data type apparently does not support the regression CNNs I use.

Accepted Answer

Joss Knight on 27 Oct 2017
Edited: 27 Oct 2017
You needn't worry too much about the efficiency of file I/O. Even with a file-path table, data is prefetched in the background during training. Your only concern is setting the mini-batch size appropriately for maximum efficiency.
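For example (a minimal sketch; tbl and layers stand for your existing table and layer array, and 128 is just a starting value to tune):
>> options = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu', 'MiniBatchSize', 128);
>> net = trainNetwork(tbl, layers, options); % mini-batches are prefetched and copied to the GPU automatically
Increase MiniBatchSize until the GPU is fully utilized or you run out of GPU memory, and compare training throughput at each setting.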

More Answers (1)

Corey Silva on 24 Oct 2017
You can use the "trainingOptions" function to tell "trainNetwork" to use the GPU.
For example, if "trainDigitData" and "layers" are already defined, the following does this:
>> options = trainingOptions('sgdm','ExecutionEnvironment','gpu');
>> convnet = trainNetwork(trainDigitData,layers,options);
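Note that the 'gpu' setting requires Parallel Computing Toolbox and a supported CUDA-capable NVIDIA GPU; with the default 'auto' setting, "trainNetwork" will also use a compatible GPU automatically if one is available.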
1 Comment
quincy on 25 Oct 2017
Edited: 25 Oct 2017
Dear Corey, thanks for your answer! I know about this option, and it should work when using an imageDatastore as the data source. As far as I understand, data handling is then optimized so that the number of graphics-card transfers is minimal. When using a CNN in regression mode, however, the targets need to be the regression values, which does not seem to be supported by imageDatastore. The workaround here is to use a table. The question is whether tables also work optimally with graphics cards, or whether there is another best practice.
Best, Stephan
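For reference, the kind of table-based regression setup I mean looks roughly like this (a minimal sketch; the input size, layers, and names are placeholders):
>> tbl = table(imagePaths, targets); % column 1: file paths, column 2: numeric regression responses
>> layers = [imageInputLayer([64 64 1]); convolution2dLayer(3,16); reluLayer; fullyConnectedLayer(1); regressionLayer];
>> options = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');
>> net = trainNetwork(tbl, layers, options);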
