Architecture of the neural network generated by nftool?

Karthik on 26 Feb 2012
I trained a 1-2-1 neural network on a cosine wave sampled at 0:0.1:2*pi, setting the nftool option for the number of hidden neurons to 2, and saved the network to the workspace as net1.
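Roughly, the equivalent command-line setup would be something like this (a sketch; fitnet is assumed here as the equivalent of what nftool generates):
x = 0:0.1:2*pi; % training inputs
t = cos(x); % cosine targets
net1 = fitnet(2); % function-fitting network with 2 hidden neurons (assumed equivalent to the nftool setup)
net1 = train(net1,x,t); % train and keep the network as net1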
After that, I changed the weights and biases as follows:
net1.IW{1,1}=[1 5]';
net1.b{1,1}=[2 2]';
net1.LW{2,1}=[4 8];
net1.b{2,1}=7;
I didn't make any other changes to the network.
Then I executed the following code to check whether I had understood the architecture of the network correctly:
sum(tansig(0.*net1.IW{1,1}'+net1.b{1,1}').*net1.LW{2,1})+net1.b{2,1}
sim(net1,0)
However, the two lines gave me different results:
ans =
18.5683
ans =
2.0855
Shouldn't the results of the last two lines be the same? Is something wrong with the toolbox, or have I misunderstood the architecture of the generated network?

Accepted Answer

Mark Hudson Beale on 1 Mar 2012
You have correctly understood how the main part of the neural network works; however, the network's inputs and outputs also do some processing.
You can see the processing functions and settings in the processFcns and processSettings fields of the inputs and of the second layer's outputs:
net.inputs{1}.processFcns
net.inputs{1}.processSettings
net.outputs{2}.processFcns
net.outputs{2}.processSettings
The processSettings were set automatically when the network was first trained. For instance, MAPMINMAX's settings store the range of the training inputs X so that new inputs are consistently mapped into the range [-1 1].
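For example, here is a small standalone sketch of MAPMINMAX's 'apply' and 'reverse' modes (illustrative values, not your network's actual settings):
x = 0:0.1:2*pi; % same range as the training inputs
[xn,ps] = mapminmax(x); % map x into [-1 1] and return the settings ps
xn0 = mapminmax('apply',0,ps); % reuse ps so a new input is mapped consistently
x0 = mapminmax('reverse',xn0,ps); % invert the mapping; x0 is 0 again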
Type "help nnprocess" to see a list of processing functions you can assign to a network before training if you like, beyond the ones your network might have.
If the first processing function for the inputs is MAPMINMAX, you can process the inputs as follows:
x = 0;
for i = 1:numel(net.inputs{1}.processFcns)
    % 'apply' runs each stored processing step forward on the new input
    x = feval(net.inputs{1}.processFcns{i},'apply',x,net.inputs{1}.processSettings{i});
end
At this point you can put X into your network equation above to calculate Y. Here is another notation for that calculation; BSXFUN makes it easy to add bias vectors to weighted input matrices.
y = bsxfun(@plus,net.LW{2,1}*tansig(bsxfun(@plus,net.IW{1,1}*x,net.b{1})),net.b{2});
Then reverse-process the output Y:
for i = numel(net.outputs{2}.processFcns):-1:1
    % 'reverse' undoes each output processing step, last step first
    y = feval(net.outputs{2}.processFcns{i},'reverse',y,net.outputs{2}.processSettings{i});
end
At that point Y should match what you got from SIM(NET1,0).
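Putting the steps together, a sketch of the full manual forward pass for your network net1 (assuming only standard processing functions, all of which support the 'apply' and 'reverse' modes):
xraw = 0;
x = xraw;
for i = 1:numel(net1.inputs{1}.processFcns)
    x = feval(net1.inputs{1}.processFcns{i},'apply',x,net1.inputs{1}.processSettings{i});
end
y = net1.LW{2,1}*tansig(net1.IW{1,1}*x + net1.b{1}) + net1.b{2}; % scalar input, so plain + suffices
for i = numel(net1.outputs{2}.processFcns):-1:1
    y = feval(net1.outputs{2}.processFcns{i},'reverse',y,net1.outputs{2}.processSettings{i});
end
y % should now match sim(net1,0)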

More Answers (0)
