Why is the neural network's simulated output value different from the output value obtained by hand calculation?

Input data: 3 x 150 (3 inputs, 150 samples). Target: 1 x 150 (1 x 50 each for 0, 0.5, and 1.0 respectively).

Sample = [20 50 60];
net = newff(...);                      % create custom network
[net,tr] = train(net,Inputs,Targets);  % train the network
Output = net(Sample')                  % predict the output value for the sample

The training stopped at about 10 iterations.
But the output value obtained from Output = net(Sample') is totally different from the value obtained using the following calculations:
For example: calculate (back-propagate) the hidden-layer and output-layer errors,

δA = outA (1 − outA) (δα WAα + δβ WAβ)

change the hidden-layer weights,

WA_new = WA_old + η δA inλ

and change the output-layer weights,

WB_new = WB_old + η δα outA

and so on.
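For reference, here is a minimal MATLAB sketch of one such update step for a hypothetical network with one hidden unit A and two output units α and β; all weights, inputs, targets, and the learning rate below are illustrative, not values from the actual network:

rng(0);                                      % reproducible random weights
logsigf = @(z) 1./(1+exp(-z));               % logsig activation
eta = 0.5;                                   % learning rate (arbitrary)
x   = [0.2; 0.5; 0.6];                       % one (normalized) input sample
tAlpha = 1.0;  tBeta = 0.0;                  % targets for the two output units
WAin    = randn(3,1);                        % input -> hidden-unit-A weights
WAalpha = randn;  WAbeta = randn;            % hidden -> output weights

outA     = logsigf(WAin' * x);               % forward pass
outAlpha = logsigf(WAalpha * outA);
outBeta  = logsigf(WAbeta  * outA);

dAlpha = outAlpha*(1-outAlpha)*(tAlpha-outAlpha);       % output-layer errors
dBeta  = outBeta *(1-outBeta) *(tBeta -outBeta);
dA     = outA*(1-outA)*(dAlpha*WAalpha + dBeta*WAbeta); % hidden-layer error

WAalpha = WAalpha + eta*dAlpha*outA;         % output-layer weight updates
WAbeta  = WAbeta  + eta*dBeta *outA;
WAin    = WAin    + eta*dA*x;                % hidden-layer weight update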
Using Output = net(Sample'), the value was 0.975. However, from the hand calculation after 1 iteration, the value was 0.777. The target for this sample is 1.0.
Does anyone know about this? Thanks.

Accepted Answer

Greg Heath on 4 Jul 2015
Insufficient information
1. What do the 3 target values represent?
2. Your analytic calculations make no sense
a. Samples should be 3-D columns
b. yn = B2 + LW*tanh(B1 + IW*xn)
for normalized inputs and targets. See
http://www.mathworks.com/matlabcentral/newsreader/view_thread/341631#936181
Hope this helps.
Thank you for formally accepting my answer
Greg
2 Comments
mun1013 on 7 Jul 2015 (edited: 7 Jul 2015)
1. Yes, the sample should be Sample = [20;50;60]; sorry for the typo in the question. The target for this sample is 1.0.
2. I don't understand this equation - yn = B2 + LW*tanh(B1 + IW*xn)?
3. For the targets, 0 represents the first 3x50 block of the input data, 0.5 represents the following 3x50 block, and 1.0 represents the last 3x50 block.
4. There are two hidden layers, and logsig is used for this training. I have normalized both the input data and the sample.
I was comparing the results from Output = net(Sample') and from the analytic calculation, but they give different values: 0.9916 and 0.0590 respectively. Why? The target for this sample is 1.0. The equations I used are below:
Using logsig for each layer,
Ij = IW*Input_data;        % hidden-layer net input
Q  = 1./(1+exp(-Ij));      % logsig activation of the hidden layer
Ik = LW*Q;                 % output-layer net input
Output = 1./(1+exp(-Ik));  % logsig output
Thank you.
Greg Heath on 8 Jul 2015
2. I don't understand this equation - yn = B2 + LW*tanh(B1 + IW*xn)?
It is the default normalized output assuming the defaults of FITNET or FEEDFORWARDNET and the default normalized input. tanh is the same as the default tansig. So the only things you have to do are:
a. Normalize the input, x, and target, t, to [-1 1] (xn and tn) using MAPMINMAX
b. Obtain the weights B2, LW, B1, and IW from net.b{2}, net.LW, net.b{1}, and net.IW
c. Plug into the equation for yn
d. Unnormalize yn using tsettings from MAPMINMAX
e. It has been known for decades that if you normalize to [-1,1] (mapminmax) or to mean=0, std=1 (mapstd or zscore) and use tanh (tansig) for the hidden layers and a linear output layer function (purelin), training speed is optimum. Both the specialized regression function FITNET and the general FEEDFORWARDNET use these defaults.
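A minimal sketch of steps a-d for a default single-hidden-layer FITNET (tansig hidden layer, purelin output); the data x and t and the hidden size 10 below are illustrative, not from this thread:

x = rand(3,150);                          % hypothetical 3 x 150 input
t = rand(1,150);                          % hypothetical 1 x 150 target
net = fitnet(10);                         % default tansig/purelin network
net = train(net,x,t);

[xn,xsettings] = mapminmax(x);            % a. normalize input to [-1 1]
[tn,tsettings] = mapminmax(t);            %    and the target likewise

B1 = net.b{1};  IW = net.IW{1,1};         % b. hidden biases and input weights
B2 = net.b{2};  LW = net.LW{2,1};         %    output bias and layer weights

yn = B2 + LW*tanh(B1 + IW*xn);            % c. normalized output
y  = mapminmax('reverse',yn,tsettings);   % d. unnormalize

max(abs(y - net(x)))                      % ~0: net(x) normalizes internally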
Using logsig for each layer,
Ij = IW*Input_data;
Q = 1./(1+exp(-Ij));
Ik = LW*Q;
Output = 1./(1+exp(-Ik));
Did you account for biases and normalization? 
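For illustration, a sketch of the same hand calculation with the biases and the mapminmax normalization of steps a and d included, assuming a two-hidden-layer logsig network as described above; xsettings and tsettings come from the training data as in the previous sketch, and all names are illustrative:

logsigf = @(z) 1./(1+exp(-z));                 % logsig as a function handle

xn = mapminmax('apply',Sample,xsettings);      % normalize the sample (step a)

A1 = logsigf(net.IW{1,1}*xn + net.b{1});       % first hidden layer, with bias
A2 = logsigf(net.LW{2,1}*A1 + net.b{2});       % second hidden layer, with bias
yn = logsigf(net.LW{3,2}*A2 + net.b{3});       % output layer, with bias

Output = mapminmax('reverse',yn,tsettings)     % undo target normalization (step d)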

More Answers (0)
