neural network training in a loop

Views: 1 (last 30 days)
Eugene Buyakin on 3 July 2016
Edited: M J on 13 October 2020
I am trying to train a neural network over several iterations, using a FOR loop to set the number of training epochs (I need this as preparation for an experiment). However, the results of such training differ from the results of the standard training process with the same number of epochs. I suspect it's due to some training settings being automatically adjusted at each iteration, but I can't find which ones exactly. I'd appreciate any help/clues. Here is the code to illustrate the problem:
p = rand(2,10);
t = 2*p(1,:) + p(2,:) + 3;
unet = feedforwardnet(2);
unet.divideFcn = '';
unet = configure(unet,p,t);
unet.trainParam.showWindow = false;
mnet = unet;
mnet.trainParam.epochs = 1;
for i = 1:5
mnet = train(mnet,p,t);
end
anet = unet;
anet.trainParam.epochs = 5;
anet = train(anet,p,t);
I expected anet (trained with a standard 5-epoch run) and mnet (trained 5 times with 1-epoch runs) to be the same (i.e., to have the same weights in IW/LW/b), but that's not the case. Thanks, Eugene

Answers (1)

Greg Heath on 4 July 2016
1. The number of epochs needed to reach a satisfactory result depends on the random initial weights and the random data division. Therefore, to be able to reproduce previous results, ALWAYS initialize the RNG to an initial state of your choice.
2. FEEDFORWARDNET is a generic net that is automatically called by
a. FITNET, specialized for regression and curve fitting
b. PATTERNNET, specialized for classification and pattern recognition
3. Typically, it is better to use the specialized versions AND the documentation examples given in the help and doc documentation:
help fitnet
doc fitnet
Additional examples can be found using the command
help nndatasets
4. ALWAYS try to plot and familiarize yourself with the data.
5. For smooth plots it will take at least NLE hidden nodes where
NLE = Number of Local Extrema
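Point 1 above can be sketched as follows; this is a minimal illustration, assuming the default RNG and using the toolbox function getwb to read the combined weight/bias vector after configuration:

```matlab
rng(0);                                          % fix the RNG state
p = rand(2,10);
t = 2*p(1,:) + p(2,:) + 3;
w1 = getwb(configure(feedforwardnet(2),p,t));    % initial weights, run 1

rng(0);                                          % same state -> same data, same init
p = rand(2,10);
t = 2*p(1,:) + p(2,:) + 3;
w2 = getwb(configure(feedforwardnet(2),p,t));    % initial weights, run 2
```

With the RNG reset to the same state, w1 and w2 should be identical, which is what makes training runs comparable in the first place.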
% Initialize the RNG so results are reproducible
rng(0)
p = rand(2,10);
t = 2*p(1,:) + p(2,:) + 3;
% Plot t vs p
unet = feedforwardnet(2);
unet.divideFcn = ''; % Equivalent to 'dividetrain'
% What makes you think 2 hidden nodes is appropriate??
unet = configure(unet,p,t);
% An empty net will be automatically configured by TRAIN.
% However, for a nonempty net, TRAIN will continue from the
% existing weights. Therefore, CONFIGURE is only necessary
% when removing existing weights and reinitializing settings.
unet.trainParam.showWindow = false;
mnet = unet;
mnet.trainParam.epochs = 1;
for i = 1:5
mnet = train(mnet,p,t);
end
% This is not doing what you think: several training parameters,
% e.g., mu, are automatically reinitialized every time TRAIN is called.
anet = unet;
anet.trainParam.epochs = 5;
anet = train(anet,p,t);
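One way to make the loop behave more like a single multi-epoch run is to carry the optimizer state across calls. This is a hedged sketch, assuming the default trainlm algorithm and that the training record tr returned by TRAIN stores the adaptive mu history in tr.mu:

```matlab
rng(0);                              % reproducible initial weights and data
p = rand(2,10);
t = 2*p(1,:) + p(2,:) + 3;
mnet = feedforwardnet(2);
mnet.divideFcn = '';
mnet = configure(mnet,p,t);
mnet.trainParam.showWindow = false;
mnet.trainParam.epochs = 1;
for i = 1:5
    [mnet,tr] = train(mnet,p,t);     % tr is the training record
    mnet.trainParam.mu = tr.mu(end); % resume from the last adaptive mu
end
```

Even then, the two runs may not match exactly, since stopping criteria such as min_grad and max_fail are re-evaluated on every call to TRAIN.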
Hope this helps.
Thank you for formally accepting my answer
Greg
1 Comment
M J on 13 October 2020
Edited: M J on 13 October 2020
Dear Greg, sorry for the slightly unrelated question, but I was simply wondering if there was a solution to this specific question and if you could help, if possible of course.
