
Using deployed neural network

Gondos Gellert, 29 May 2015
Commented: Gondos Gellert, 8 June 2015
I have two variables: one is a series (x) running from 1 to 365 (days), and the other holds real numbers (y). I am using a nonlinear autoregressive neural network with external input (NARX), and I would like to predict the value of y in the future, i.e. for x greater than 365. I deployed the neural network, but I don't know how to use it.
I tried it the following way, but the returned values have no connection with the real data. How should I use this function?
X = [0 0];
X = num2cell(X);
X = transpose(X);
for i = 1:365
    Xi = [i 0];
    Xi = num2cell(Xi);
    Xi = transpose(Xi);
    [Y,Xf,Af] = ann_test2_generated(X, Xi);
    Y
end
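For reference, here is a minimal sketch of how a function generated this way expects to be called, judging from its header below: the first argument X is a 2xTS cell of inputs over the timesteps (input #1 the day index, input #2 the observed y series, since this is the open-loop NARX form), and the second argument Xi is a 2x1 cell holding the single initial input-delay state. In the sketch, x = 1:365 and y are assumed to be 1x365 row vectors of the known data, and the loop that steps past day 365 by feeding each prediction back in is illustrative only, not code from the original post.
% Sketch only: x = 1:365 and y = matching 1x365 data vector (assumed).
X  = [num2cell(x(2:end)); num2cell(y(2:end))];   % 2x364 cell, days 2..365
Xi = [num2cell(x(1));     num2cell(y(1))];       % 2x1 cell, delay state (day 1)
[Y, Xf] = ann_test2_generated(X, Xi);            % Y{ts} estimates y(ts+1)
% Step past day 365: reuse the final delay states and substitute each
% prediction for the unknown future y (a manual closed loop).
for day = 366:372
    Ystep = ann_test2_generated({day; 0}, Xf);   % 2nd input unused this step
    yhat  = Ystep{1};                            % prediction for this day
    Xf    = {day; yhat};                         % becomes the next delay state
end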
Here is my deployed function:
function [Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,~)
%MYNEURALNETWORKFUNCTION neural network simulation function.
%
% Generated by Neural Network Toolbox function genFunction, 29-May-2015 12:42:30.
%
% [Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,~) takes these arguments:
%
% X = 2xTS cell, 2 inputs over TS timesteps
% Each X{1,ts} = 1xQ matrix, input #1 at timestep ts.
% Each X{2,ts} = 1xQ matrix, input #2 at timestep ts.
%
% Xi = 2x1 cell, initial input delay states (1 delay for each of the 2 inputs).
% Each Xi{1,ts} = 1xQ matrix, initial states for input #1.
% Each Xi{2,ts} = 1xQ matrix, initial states for input #2.
%
% Ai = 2x0 cell, initial layer delay states (this network has no layer delays).
%
% and returns:
% Y = 1xTS cell of 1 output over TS timesteps.
% Each Y{1,ts} = 1xQ matrix, output #1 at timestep ts.
%
% Xf = 2x1 cell, final input delay states (1 delay for each of the 2 inputs).
% Each Xf{1,ts} = 1xQ matrix, final states for input #1.
% Each Xf{2,ts} = 1xQ matrix, final states for input #2.
%
% Af = 2x0 cell, final layer delay states (empty; no layer delays).
%
% where Q is number of samples (or series) and TS is the number of timesteps.
%#ok<*RPMT0>
% ===== NEURAL NETWORK CONSTANTS =====
% Input 1
x1_step1_xoffset = 1;
x1_step1_gain = 0.00549450549450549;
x1_step1_ymin = -1;
% Input 2
x2_step1_xoffset = -21480;
x2_step1_gain = 2.97129138266073e-06;
x2_step1_ymin = -1;
% Layer 1
b1 = [-9.3839221387106413;6.3673650679944052;10.198031952946559;-5.6065382458804498;1.4990163108572789;5.1188396872625352;-0.90118631104219182;-0.30494277554968652;-1.697067083760901;-1.6272737005091373;6.3305489926853982;3.0484646453328987;-1.6579644194560408;4.3743984158143681;7.539389327001226;-1.3832313531170317;6.6047117902635213;-10.14216902642435;5.7011344366241081;7.0957086293236227];
IW1_1 = [2.0258887755500834;-4.2028504717535542;-8.5464197976148668;2.5080209301563365;-3.9463681010808207;-2.5370763473999776;1.4061148692807672;1.5181842968841124;-3.1441033960841618;-7.8548645123947756;0.0091129562548408864;3.3948767243040034;-1.7058048527967125;5.3377573195333579;2.3789548426459328;3.3080234966837332;5.0430922734647279;-10.570582147844345;2.1165050263862364;4.0310619147913478];
IW1_2 = [-3.4365907728390428;11.567104200163792;-0.1263365421914415;6.3186864593347876;8.2894556267593753;12.219941806571244;-0.8716215531914453;4.94787248538118;4.2165948351438223;-8.525057555843965;13.111856679144475;-1.1950749050600189;0.23506905319499458;-9.9752226612737704;-10.615461396517761;-4.8799419207110208;-1.6176406051352417;-0.6096358833405241;-6.0675879268844328;-4.3083059524580802];
% Layer 2
b2 = 2.5640990680145257;
LW2_1 = [0.6545727832513627 -0.18920916505217597 -1.2655928487990731 -0.58900624069322183 0.392454543501077 0.11816796602420446 -0.65362987999478428 0.25904685510735481 0.11936289461658904 0.13118489685451218 -0.17113982960225663 -1.068411519863667 -1.5868322036805684 0.022550373062452611 0.60270331566362767 0.58077892473389758 -0.47028353992483191 -0.20473592291391843 -2.7571898591909338 0.93992224806624713];
% Output 1
y1_step1_ymin = -1;
y1_step1_gain = 2.97129138266073e-06;
y1_step1_xoffset = -21480;
% ===== SIMULATION ========
% Format Input Arguments
isCellX = iscell(X);
if ~isCellX, X = {X}; end;
if (nargin < 2), error('Initial input states Xi argument needed.'); end
% Dimensions
TS = size(X,2); % timesteps
if ~isempty(X)
    Q = size(X{1},2); % samples/series
elseif ~isempty(Xi)
    Q = size(Xi{1},2);
else
    Q = 0;
end
% Input 1 Delay States
Xd1 = cell(1,2);
for ts=1:1
    Xd1{ts} = mapminmax_apply(Xi{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
end
% Input 2 Delay States
Xd2 = cell(1,2);
for ts=1:1
    Xd2{ts} = mapminmax_apply(Xi{2,ts},x2_step1_gain,x2_step1_xoffset,x2_step1_ymin);
end
% Allocate Outputs
Y = cell(1,TS);
% Time loop
for ts=1:TS
    % Rotating delay state position
    xdts = mod(ts+0,2)+1;
    % Input 1
    Xd1{xdts} = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
    % Input 2
    Xd2{xdts} = mapminmax_apply(X{2,ts},x2_step1_gain,x2_step1_xoffset,x2_step1_ymin);
    % Layer 1
    tapdelay1 = cat(1,Xd1{mod(xdts-1-1,2)+1});
    tapdelay2 = cat(1,Xd2{mod(xdts-1-1,2)+1});
    a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*tapdelay1 + IW1_2*tapdelay2);
    % Layer 2
    a2 = repmat(b2,1,Q) + LW2_1*a1;
    % Output 1
    Y{1,ts} = mapminmax_reverse(a2,y1_step1_gain,y1_step1_xoffset,y1_step1_ymin);
end
% Final Delay States
finalxts = TS+(1:1);
xits = finalxts(finalxts<=1);
xts = finalxts(finalxts>1)-1;
Xf = [Xi(:,xits) X(:,xts)];
Af = cell(2,0);
% Format Output Arguments
if ~isCellX, Y = cell2mat(Y); end
end
% ===== MODULE FUNCTIONS ========
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
    y = bsxfun(@minus,x,settings_xoffset);
    y = bsxfun(@times,y,settings_gain);
    y = bsxfun(@plus,y,settings_ymin);
end
% Sigmoid Symmetric Transfer Function
function a = tansig_apply(n)
    a = 2 ./ (1 + exp(-2*n)) - 1;
end
% Map Minimum and Maximum Output Reverse-Processing Function
function x = mapminmax_reverse(y,settings_gain,settings_xoffset,settings_ymin)
    x = bsxfun(@minus,y,settings_ymin);
    x = bsxfun(@rdivide,x,settings_gain);
    x = bsxfun(@plus,x,settings_xoffset);
end

Accepted Answer

Greg Heath, 6 June 2015
1. The significant correlation threshold is the 95% confidence threshold for the cumulative probability function of the absolute value of the autocorrelation of Gaussian noise.
2. All values of the absolute value of the autocorrelation of the original series that exceed the threshold are significant.
3. The corresponding lags are significant lags.
4. Choose (by trial and error) the smallest subset of the smallest lags that will yield a satisfactory result.
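A sketch of steps 1-3 (our illustration, using the common large-sample approximation 1.96/sqrt(N) for the 95% white-noise threshold; the exact threshold Greg derives may differ slightly, and target/maxlag are illustrative names):
zt      = zscore(target);                    % standardize the series
N       = length(zt);
maxlag  = N - 1;
ac      = nncorr(zt, zt, maxlag, 'biased');  % autocorrelation at lags -maxlag..maxlag
ac      = ac(maxlag+2:end);                  % keep positive lags 1..maxlag
thresh  = 1.96/sqrt(N);                      % approx. 95% Gaussian-noise bound (step 1)
siglags = find(abs(ac) > thresh)             % the significant lags (steps 2-3)
% Step 4: try the smallest subsets of the smallest siglags as the
% feedback delays of narnet until the result is satisfactory.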
1 Comment
Gondos Gellert, 8 June 2015
A lot of people on this forum have trouble calculating the feedback delays and the hidden layer size. Maybe it would be easier for you to post a working example that we can see.


More Answers (1)

Greg Heath, 30 May 2015
Edited: Greg Heath, 30 May 2015
What physical entity does y represent?
I doubt that x and y are significantly correlated. Check with nncorr or, better, a correlation function based on fft or from another toolbox.
Therefore, use NARNET and search for my NARNET tutorial on multistep prediction in the NEWSGROUP.
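For concreteness, a minimal narnet sketch of the kind the tutorial walks through (y is assumed to be the 1x365 series as a row vector; FD and H are placeholders to be chosen from the significant-lag analysis, not recommended values):
T   = num2cell(y);                       % 1x365 cell array of targets
FD  = 1:2;                               % placeholder feedback delays
H   = 10;                                % placeholder hidden layer size
net = narnet(FD, H);
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
net = train(net, Xs, Ts, Xi, Ai);
% Close the loop for genuine multistep prediction beyond the known data:
[netc, Xic, Aic] = closeloop(net, Xi, Ai);
yPred = netc(cell(0, 30), Xic, Aic)      % predict the next 30 steps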
Hope this helps.
Thank you for formally accepting my answer
Greg
2 Comments
Gondos Gellert, 5 June 2015
I started to use NARNET, but I am having trouble calculating the proper feedback delays and number of hidden neurons.
My dataset is a spending transaction history, so it is periodic with a period of 30 days.
I read your posts, but I didn't find an exact answer on how to calculate the proper feedback delay and number of neurons.
Could you help me with this topic, or write it down exactly as program code?
Thank you.
Gondos Gellert, 5 June 2015
Edited: Gondos Gellert, 5 June 2015
The output of this code:
ZT = zscore(target);
autocorrT = nncorr(ZT, ZT, 364, 'biased')
plot(autocorrT)
is this plot: [autocorrelation plot omitted from the original post]
And without using zscore(...): [plot omitted]
And this code gives Hub = 121:
[I, N] = size(1:365);
[O, N] = size(target);
Neq = N*O;
Hub = floor((N-1)*O/(I+O+1))
So if I understand you correctly, the hiddenLayerSize should be 121, and FD should be 1, because that is where the most significant correlation is. But then the closed-loop narnet results are these: [plot omitted from the original post]. And this is horrible. What is the problem?
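A worked restatement of the Hub calculation above (our comments, using Greg's usual definitions) may show why H = 121 behaves so badly: Hub is only the upper bound at which the number of weights reaches the number of training equations, not a recommended size.
I   = 1;  O = 1;  N = 365;           % inputs, outputs, timesteps
Neq = N*O;                           % number of training equations = 365
% Nw = (I+1)*H + (H+1)*O             % weights of a single-hidden-layer I-H-O net
Hub = floor((N-1)*O/(I+O+1))         % largest H with Nw <= Neq, i.e. 121
% With H = Hub there are almost as many weights as equations, so the net
% can memorize the training data and generalize poorly; a much smaller H,
% with FD chosen from the significant autocorrelation lags (see the
% accepted answer), is the usual remedy.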

