How do I construct a neural network?

Views: 1 (last 30 days)
Thirunavukkarasu on 26 August 2014
Commented: Greg Heath on 30 August 2014
How do I construct a neural network that has two layers: four weights linking the input to the hidden layer with no biases, and two weights linking the hidden layer to the output with a single bias at the output neuron?

Accepted Answer

Greg Heath on 28 August 2014
What you are asking doesn't make much sense. For a standard universal-approximation I-H-O net, the number of weights is
Nw = (I+1)*H + (H+1)*O
where the 1s correspond to the biases. If either bias node is removed, the net is no longer a universal approximator.
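As a quick sanity check, you can count the weights and biases with getwb. A minimal sketch, assuming the Neural Network Toolbox is available; the sizes I = 2, H = 4, O = 1 are only an illustration:
I = 2; H = 4; O = 1;
net = feedforwardnet(H); % standard I-H-O net with both bias layers
net = configure(net, rand(I,10), rand(O,10)); % dummy data just to set sizes
Nw = numel(getwb(net)) % 17 = (2+1)*4 + (4+1)*1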
It looks like you want
Nw = I*H + (H+1)*O
with
I*H = 4
H*O = 2
O = 1
Consequently, O = 1, H = 2, I = 2, and
size(input) = [ 2 N ]
size(target) = [ 1 N ]
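For concreteness, a minimal sketch that builds a net with exactly this weight structure and inspects it (the zero matrices are dummy data used only to set the input and output sizes):
net = patternnet(2); % H = 2 hidden neurons
net.biasConnect = [ 0; 1 ]; % drop the hidden-layer bias, keep the output bias
net = configure(net, zeros(2,4), zeros(1,4)); % sets I = 2, O = 1
size(net.IW{1,1}) % [2 2] : the 4 input-to-hidden weights
size(net.LW{2,1}) % [1 2] : the 2 hidden-to-output weights
size(net.b{2}) % [1 1] : the single output bias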
Since you don't specify regression or classification, let's try classification with the exclusive-or (XOR) function. Typically, I want the fraction of explained target variance, i.e., R-squared (the coefficient of determination; see Wikipedia), to be >= 0.99.
clear all, clc
x = [ 1 1 -1 -1 ; -1 1 1 -1 ]; % XOR inputs, size [2 4]
t = [ 0 1 0 1 ]; % XOR targets, size [1 4]
MSE00 = var(t) % 0.33333, reference MSE of the constant-mean model
net = patternnet(2); % H = 2 hidden neurons
net.biasConnect = [ 0; 1 ]; % no hidden-layer bias (the "input bias")
net.divideFcn = 'dividetrain'; % no validation or test subsets
rng(0)
for i = 1:10 % 10 random weight initializations
    net = configure(net, x, t); % reinitializes the weights
    [net, tr, y(i,:), e] = train(net, x, t);
    R2(i,:) = 1 - mse(e)/MSE00; % fraction of target variance explained
end
y = y % y = 0.5*ones(10,4)
R2 = R2 % R2 = 0.25*ones(10,1), far from 0.99!
Obviously, you can get negligible error once the hidden-layer bias is included.
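For comparison, a minimal sketch of the same experiment with the hidden-layer bias restored (biasConnect = [1;1] is the patternnet default); with H = 2 this typically drives R2 close to 1 on XOR, though any single initialization can still stall in a local minimum:
net = patternnet(2); % default biasConnect = [ 1; 1 ], both biases present
net.divideFcn = 'dividetrain';
rng(0)
net = configure(net, x, t);
[net, tr, y2, e2] = train(net, x, t);
R2full = 1 - mse(e2)/MSE00 % typically ~1, i.e., negligible error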
Hope this helps.
Thank you for formally accepting my answer.
Greg
2 Comments
Thirunavukkarasu on 29 August 2014
Greg Heath on 30 August 2014
You have designated 3 node layers, which makes intuitive sense.
HOWEVER, THE CONVENTION IS TO COUNT WEIGHT LAYERS, OF WHICH YOU HAVE 2:
a hidden weight layer and an output weight layer.
Therefore, this is considered a 2-layer net!
I don't like this convention. However, it is what it is.
The nodes you have labeled as being in the input layer are considered FAN-IN UNITS, whereas the nodes in the other two layers are considered NEURONS.
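For what it's worth, the toolbox itself follows this counting, as a quick check shows (assuming the Neural Network Toolbox):
net = patternnet(2); % one hidden layer plus one output layer
net.numLayers % 2 : the fan-in (input) units are not counted as a layer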
Nevertheless, without a bias weight in the first weight layer (as I illustrated with the XOR example), you do not have a universal approximator.
Do you have any questions?
Greg

