Below is the code for a neural network I have designed. Training runs fine on a single CPU but is very slow, so is it possible to use a parallel set-up? I have made what I think are the required modifications, but I get the error shown below. What is meant by a recurrent network, and is there any way to overcome this error?
Error using trainNetwork (line 150)
Parallel training of recurrent networks is not supported. 'ExecutionEnvironment' value in trainingOptions function must be 'auto', 'gpu', or 'cpu'.
Error in Network2 (line 49)
network2 = trainNetwork(TrainingInputData,TrainingOutputData,layers,options)
Caused by:
Error using nnet.internal.cnn.assembler.setupExecutionEnvironment (line 17)
Parallel training of recurrent networks is not supported. 'ExecutionEnvironment' value in trainingOptions function must be 'auto', 'gpu', or 'cpu'.
clc
clear
parpool   % start a parallel pool (only needed for 'parallel' execution)

% Prepare the data and hold out the fourth sequence for validation
[outputdata, inputdata, specifications] = dataprep();
TrainingOutputData = outputdata;
TrainingInputData = inputdata;
TestingOutputData = cell(1,1);
TestingInputData = cell(1,1);
TestingOutputData{1,1} = outputdata{4,1};
TestingInputData{1,1} = inputdata{4,1};
TrainingOutputData(4,:) = [];
TrainingInputData(4,:) = [];

% Training hyperparameters
maxEpochs = 10;
miniBatchSize = 1;
Neurons = 150000;
numResponses = size(TrainingOutputData{1},1);
featureDimension = size(TrainingInputData{1},1);

% Sequence-to-sequence regression network
layers = [ ...
    sequenceInputLayer(featureDimension)
    fullyConnectedLayer(Neurons)
    fullyConnectedLayer(numResponses)
    regressionLayer];

options = trainingOptions('sgdm', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',3, ...
    'LearnRateDropFactor',0.1, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'Plots','training-progress', ...
    'ValidationData',{TestingInputData,TestingOutputData}, ...
    'ValidationFrequency',1, ...
    'Verbose',1, ...
    'VerboseFrequency',1, ...
    'ExecutionEnvironment','parallel');   % this setting triggers the error

network2 = trainNetwork(TrainingInputData,TrainingOutputData,layers,options)

Accepted Answer

Joss Knight, 15 February 2019

No, you can't use parallel training for a sequence network, sorry.
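Because the network starts with a `sequenceInputLayer`, `trainNetwork` treats it as a sequence (recurrent-style) network, and, as the error message states, `'ExecutionEnvironment'` must then be `'auto'`, `'gpu'`, or `'cpu'`. A minimal sketch of the change, keeping the rest of the question's options as they are (training on `'gpu'` assumes the Parallel Computing Toolbox and a supported GPU are available; `'auto'` falls back to the CPU otherwise):

```matlab
% Parallel training is not supported for sequence networks, so replace
% 'ExecutionEnvironment','parallel' with one of the supported values.
options = trainingOptions('sgdm', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'ExecutionEnvironment','gpu');   % or 'auto' / 'cpu'

network2 = trainNetwork(TrainingInputData,TrainingOutputData,layers,options);
```

Note that a single GPU is usually the fastest supported option for a network of this size; the `parpool` call is then no longer needed.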

More Answers (0)


Asked: 14 February 2019
Answered: 15 February 2019
