Neural Net: Saving Trained Net
I will explain my problem. My code is given below.
% ********************* Plant ************************
s = 5000;
u = zeros(1, s);
y = zeros(1, s);
yhat = zeros(1, s);
for k = 1:s
    u(k) = unifrnd(-1, 1);
    if (k == 1)
        y(k) = 0;
    elseif (k == 2)
        y(k) = 0.3*y(k-1) + u(k) + 0.3*u(k);
    else
        y(k) = 0.3*y(k-1) + 0.6*y(k-2) + u(k) + 0.3*u(k);
    end
end
% **************** NN Modelling *********************
% Creating Neural Net
[yn, ys] = mapminmax(y);
net = newcf(u, yn, [20 10], {'tansig', 'tansig', 'purelin'}, 'trainscg');
% Training Neural Net
net.trainParam.lr = 0.05;
net.trainParam.lr_inc = 1.05;
net.trainParam.lr_dec = 0.7;
net.trainParam.hide = 50;
net.trainParam.mc = 0.9;
net.trainParam.epochs = s;
net.trainParam.goal = 1e-5;
net.trainParam.max_fail = s;
net.trainParam.time = 3*3600;
trainInd = 1:1:s;
valInd = 2:50:s;
testInd = 3:50:s;
[trainu, valu, testu] = divideind(u, trainInd, valInd, testInd);
[trainy, valy, testy] = divideind(yn, trainInd, valInd, testInd);
net = init(net);
net = train(net, u, yn);
The actual problem is different. The above program is written in an m-file and takes about 15 minutes to converge. Could you go through the program and tell me whether my programming is correct? I wish to use this net on another set of inputs, and for that I need to access the trained net. How can the trained net be saved outside this m-file, and how do I access it later? Please help. Thanks in advance.
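(For reference, a minimal sketch of the save/load workflow being asked about, using MATLAB's save and load; the file name trainedPlantNet.mat and the new-input variable unew are illustrative, not from the post.)

% After training in this m-file: save the net and the mapminmax settings
save('trainedPlantNet.mat', 'net', 'ys');     % file name is illustrative

% In another m-file / session: reload and apply the net to new inputs
S = load('trainedPlantNet.mat');              % gives S.net and S.ys
ynewn = sim(S.net, unew);                     % unew = hypothetical new input sequence
ynew  = mapminmax('reverse', ynewn, S.ys);    % undo the target normalization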
1 Comment
Walter Roberson
25 Oct 2012
If you want someone to go through your code, I recommend that you space it out to make it readable instead of packing as much as possible on one line.
Accepted Answer
More Answers (2)
Greg Heath
26 Oct 2012
1 vote
If you do not use NARXNET you will have to add additional dimensions to the input matrix to accommodate the target delays. After training, however, you will not have a net that can be readily implemented on unseen data, because it will not accommodate feedback delays.
In your case you already know, from the generating equation, that the input delay is 0 and the feedback delays are 1:2. However, you should still do the following (see the sketch after this list):
1. Remove the negative delay components autocorrT(1:N-1) = [], etc
2. Plot the resulting nonnegative delay functions autocorrT and crosscorrXT vs nonnegative delays lags = 0:N-1
3. Obtain the indices for significant correlations using the function FIND.
4. Ignore autocorrT(1) and choose a reasonable number of significant positive delays.
5. The smallest practical number of hidden nodes can be obtained by trial and error.
6. Overwrite the correlation-destroying 'dividerand' with an alternative.
7. Assuming T = zscore(y), the post-training values for the nontraining coefficients of determination (see Wikipedia) are
R2val = 1 - tr.vperf(end)
R2tst = 1 - tr.tperf(end)
These mean more to me than correlation coefficients that ignore vertical shifts of the linear output/target line fit.
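A minimal sketch of steps 1-4 above, using zscore (Statistics Toolbox) and xcorr (Signal Processing Toolbox) as stand-ins for the correlation functions named in the list; the significance threshold is an illustrative approximation, not part of the answer.

zt = zscore(y(:));                      % standardized target (column vector)
zu = zscore(u(:));                      % standardized input
N  = numel(zt);

autocorrT   = xcorr(zt, zt, 'coeff');   % lags -(N-1):(N-1), 2N-1 values
crosscorrXT = xcorr(zt, zu, 'coeff');   % positive lag d: corr of t(k) with u(k-d)

% 1. Remove the negative-delay components
autocorrT(1:N-1)   = [];
crosscorrXT(1:N-1) = [];

% 2. Plot vs the nonnegative delays
lags = 0:N-1;
plot(lags, autocorrT, lags, crosscorrXT)
legend('autocorrT', 'crosscorrXT')

% 3-4. Indices of significant correlations (ignore autocorrT(1), the zero delay)
sigthresh = 2/sqrt(N);                                   % rough 95% level, illustrative
sigFD = find(abs(autocorrT(2:end))  >= sigthresh);       % candidate feedback delays
sigID = find(abs(crosscorrXT)       >= sigthresh) - 1;   % candidate input delays (0-based)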
You may find it useful to search the NEWSGROUP and ANSWERS using the search words narx and narxnet.
Hope this helps.
Greg
Greg Heath
27 Oct 2012
1 vote
%************************************** NN Modelling ********************************
% Creating Neural Net
% [yn,ys] = mapminmax(y);
Better to use "t" for targets and "y" for outputs
Why not transform u? TANSIG works better with bipolar inputs.
% net = newcf(u,yn,[20 10] ,{'tansig','tansig','purelin'},'trainscg');
Probably do not need 2 hidden layers or that many hidden nodes.
% % Training Neural Net
% net.trainParam.lr = 0.05;
% net.trainParam.lr_inc = 1.05;
% net.trainParam.lr_dec = 0.7;
% net.trainParam.hide = 50;
% net.trainParam.mc = 0.9;
The above parameters should not be used with TRAINSCG.
help trainscg
doc trainscg
Why couldn't you just accept the net default parameters??
% net.trainParam.goal = 1e-5;
MSEgoal = 0.01*var(yn,1) = 0.01 is usually sufficient
% net.trainParam.epochs = s;
Why replace the default = 1000 with 5000 ??
% net.trainParam.max_fail = s;
Why replace the default 6 with 5000 ??
% net.trainParam.time = 3*3600;
Why?
% trainInd = 1:1:s;
% valInd = 2:50:s;
% testInd = 3:50:s;
No. They should have no points in common.
% [trainu,valu,testu] = divideind(u,trainInd,valInd,testInd);
% [trainy,valy,testy] = divideind(yn,trainInd,valInd,testInd);
No. Dividing u is sufficient (y is automatically paired with u point-by-point)
% net = init(net);
Unnecessary. NEWCF is self-initializing.
%net = train(net,u,yn);
[net, tr, Y, E] = train(net, u, yn);
tr contains all the important info
Hope this helps.
Greg
P.S. You have probably guessed by now that the reason your program ran about 20 times longer than it should have is that you changed those defaults.
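A minimal sketch (not Greg's code) that pulls these suggestions together; the hidden-layer size, the division indices, and the file name are illustrative assumptions.

% Simplified setup following the review above
[un, us] = mapminmax(u);                 % normalize the input too (bipolar for TANSIG)
[tn, ts] = mapminmax(y);                 % "t" for targets, "y" reserved for outputs

net = newcf(un, tn, 10, {'tansig', 'purelin'}, 'trainscg');   % one small hidden layer

% Keep the TRAINSCG defaults; only set a realistic goal
net.trainParam.goal = 0.01*var(tn, 1);

% Disjoint train/val/test division (these index choices are only an example)
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:2:s;        % odd samples
net.divideParam.valInd   = 2:4:s;        % no overlap with the other two sets
net.divideParam.testInd  = 4:4:s;

[net, tr, Y, E] = train(net, un, tn);    % tr holds the training record (perf, vperf, tperf, ...)

% Save the trained net and normalization settings for reuse in another m-file
save('trainedPlantNet.mat', 'net', 'us', 'ts');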