Back propagation algorithm of Neural Network: XOR training

110 views (last 30 days)
Ashikur on 22 Jan 2012
Commented: sultana albanhar on 15 Apr 2022
c = 0;
wih = .1*ones(nh,ni+1);
who = .1*ones(no,nh+1);
while (c < 3000)
    c = c+1;
    for i = 1:length(x(1,:))
        for j = 1:nh
            netj(j) = wih(j,1:end-1)*double(x(:,i)) + wih(j,end)*1;
            outj(j) = 1./(1+exp(-1*netj(j)));
        end
        % hidden to output layer
        for k = 1:no
            netk(k) = who(k,1:end-1)*outj' + who(k,end)*1;
            outk(k) = 1./(1+exp(-1*netk(k)));
            delk(k) = outk(k)*(1-outk(k))*(t(k,i)-outk(k));
        end
        % back proagation for j = 1:nh s=0; for k = 1:no s = s+who(k,j)*delk(k); end
        delj(j) = outj(j)*(1-outj(j))*s;
        s = 0;
        end
        for k = 1:no
            for l = 1:nh
                who(k,l) = who(k,l) + .5*delk(k)*outj(l);
            end
            who(k,l+1) = who(k,l+1) + 1*delk(k)*1;
        end
        for j = 1:nh
            for ii = 1:ni
                wih(j,ii) = wih(j,ii) + .5*delj(j)*double(x(ii,i));
            end
            wih(j,ii+1) = wih(j,ii+1) + 1*delj(j)*1;
        end
    end
end
I have written the code above to implement a backpropagation neural network: x is the input, t is the desired output, and ni, nh, no are the numbers of input, hidden, and output layer neurons. I am testing it on different functions such as AND and OR, and it works fine for those, but XOR is not working.
Training x = [0 0 1 1; 0 1 0 1], training t = [0 1 1 0]
who -> weight matrix from hidden to output layer
wih -> weight matrix from input to hidden layer
Can you help?
3 Comments
Greg Heath on 24 Jan 2012
If you initialized weights randomly, you could see if it is
an initialization problem.
Have you noticed the loop accidentally included in the backpropagation comment?
Greg
sultana albanhar on 15 Apr 2022
How does this code work for XOR?


Accepted Answer

Greg Heath on 25 Jan 2012
close all, clear all, clc
x = [0 0 1 1; 0 1 0 1]
t = [0 1 1 0]
[ni N] = size(x)
[no N] = size(t)
nh = 2
% wih = .1*ones(nh,ni+1);
% who = .1*ones(no,nh+1);
wih = 0.01*randn(nh,ni+1);
who = 0.01*randn(no,nh+1);
c = 0;
while (c < 3000)
    c = c+1;
    % %for i = 1:length(x(1,:))
    for i = 1:N
        for j = 1:nh
            netj(j) = wih(j,1:end-1)*x(:,i) + wih(j,end);
            % %outj(j) = 1./(1+exp(-netj(j)));
            outj(j) = tansig(netj(j));
        end
        % hidden to output layer
        for k = 1:no
            netk(k) = who(k,1:end-1)*outj' + who(k,end);
            outk(k) = 1./(1+exp(-netk(k)));
            delk(k) = outk(k)*(1-outk(k))*(t(k,i)-outk(k));
        end
        % back propagation
        for j = 1:nh
            s = 0;
            for k = 1:no
                s = s + who(k,j)*delk(k);
            end
            delj(j) = outj(j)*(1-outj(j))*s;
            % %s=0;
        end
        for k = 1:no
            for l = 1:nh
                who(k,l) = who(k,l) + .5*delk(k)*outj(l);
            end
            who(k,l+1) = who(k,l+1) + 1*delk(k)*1;
        end
        for j = 1:nh
            for ii = 1:ni
                wih(j,ii) = wih(j,ii) + .5*delj(j)*x(ii,i);
            end
            wih(j,ii+1) = wih(j,ii+1) + 1*delj(j)*1;
        end
    end
end
h = tansig(wih*[x; ones(1,N)])
y = logsig(who*[h; ones(1,N)])
e = t - round(y)
Hope this helps.
Greg
5 Comments
Shantanu Arya on 10 Sep 2020
Greg Heath, why doesn't this code work for 3-input XOR? If I replace the X and Y with 3 inputs, the error does not converge to 0!
Greg Heath on 10 Sep 2020
  1. I ALWAYS use the bipolar tanh in hidden layers. It ALWAYS works.
  2. What are x and t for 3 input XOR???
Greg
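For reference, one common way to set up the 3-input XOR (odd-parity) data would be the following; this is my own illustration, not data taken from the thread:
x = [0 0 0 0 1 1 1 1
     0 0 1 1 0 0 1 1
     0 1 0 1 0 1 0 1];   % all 8 binary input patterns, one per column
t = mod(sum(x,1),2);      % target is 1 when an odd number of inputs is 1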


More Answers (5)

Greg Heath on 27 Jan 2012
It is well known that successful deterministic training depends on a lucky choice of initial weights. The most common approach is to use a loop and create Ntrial (e.g., 10 or more) nets from different random initial weights. Then choose the best net.
It is also well known that an odd, bounded, monotonically increasing activation function like TANSIG is the preferred choice for hidden layers because it does not restrict the polarity of the layer variables. It works even better when the input is shifted to have zero mean.
You can check the superiority of TANSIG and zero-mean yourself. You can also search the comp.ai.neural-nets FAQ and archives to find both agreement and numerical experiments.
For most real-world problems the best choice for the number of hidden nodes, H, is not known a priori. That is why I have posted many examples using a double loop: an outer loop over H and an inner loop over Ntrials random weight initializations. For examples, search the newsgroup using the keywords
heath clear Ntrials
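A minimal sketch of such a double loop, using the toolbox's feedforwardnet rather than the hand-coded loops above (the names Hmax, Ntrials, and bestnet are just illustrative):
x = [0 0 1 1; 0 1 0 1];  t = [0 1 1 0];
Hmax = 4;  Ntrials = 10;  bestmse = Inf;
for H = 1:Hmax                          % outer loop over number of hidden nodes
    for trial = 1:Ntrials               % inner loop over random initializations
        net = feedforwardnet(H);        % tansig hidden layer by default
        net.divideFcn = 'dividetrain';  % only 4 patterns: no val/test split
        net.trainParam.showWindow = false;
        net = train(net, x, t);         % each net starts from new random weights
        y = net(x);
        msetrial = mean((t - y).^2);
        if msetrial < bestmse            % keep the best net found so far
            bestmse = msetrial;  bestnet = net;  bestH = H;
        end
    end
end
bestH, bestmse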
Hope this helps.
Greg
3 Comments
Imran Babar on 6 May 2013
Thank you very much, a nice example of an MLP BP NN and very easy to understand. Though I am a novice in this field, the programming idea is now clear to me.
Havot Albeyboni on 9 Dec 2020 (edited 9 Dec 2020)
Can anyone please explain this line?
netj(j) = wih(j,1:end-1)*x(:,i)+wih(j,end);
Shouldn't it be something like y3 = sigmoid(x1*w13 + x2*w23 - θ3)?
Regards



Imran Babar on 8 May 2013
Dear sir, I want to use the same code for the following data set:
Input data set = [1 1 1 2; 1 1 2 2; 1 2 2 2; 2 2 2 2], Output = [5 6 7 8]
but it always generates the output below:
1 1 1 1
I have tried my best but am unable to understand how to get the desired results.
1 Comment
Greg Heath on 10 May 2013
Your outputs are not within the range of logsig.
Either normalize your outputs to fit in (0,1), or change your output activation function (e.g., 'purelin').
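A minimal sketch of the first option, assuming the data from the question above; the variable names tn, tmin, and tmax are just illustrative:
t = [5 6 7 8];                     % raw targets from the question
tmin = min(t);  tmax = max(t);
tn = (t - tmin)/(tmax - tmin);     % normalized targets in [0,1]
% train the network on tn instead of t, then map its output y back:
% ypred = tmin + y*(tmax - tmin);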



Sohel Ahammed on 4 Jul 2015
OK. If I want to test it, how do I have to change it? For example: input: 1 0, expected output: 1 (from learning).
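One possible way to test a single pattern, assuming wih and who are still in the workspace after running the accepted answer above (this is a sketch, not code from the thread):
xtest = [1; 0];                    % the input pattern to test
h = tansig(wih*[xtest; 1]);        % hidden activations, with bias input = 1
y = logsig(who*[h; 1])             % output should be close to 1 for XOR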

dsmalenb on 17 Oct 2018
Am I missing something here? I don't see any bias neurons. Maybe this is why some inputs work and others do not?
5 Comments
dsmalenb on 17 Oct 2018
I'm sorry, but that statement does not make much sense to me. Biases are added to shift the values within the activation function. Multiplying by 1 does nothing.
Greg Heath on 9 Nov 2018
The 1 is a placeholder which is multiplied by a learned weight.
Hmm, I've been using that notation for decades, and this is the first question about it that I can remember.
Greg
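A short sketch of that convention (my own illustration of the notation, not code from the thread): the constant 1 appended to each input pattern lets the bias be learned like any other weight.
x  = [0 0 1 1; 0 1 0 1];           % 2 x N input patterns
xa = [x; ones(1,size(x,2))];       % append a constant 1 to each column
wih = 0.01*randn(2, size(xa,1));   % last column of wih is the bias weight
net_h = wih*xa;                    % the weight on the constant 1 is the learned bias,
                                   % i.e. it plays the role of -theta in
                                   % the textbook form y = f(x1*w13 + x2*w23 - theta3)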



sultana albanhar on 14 Apr 2022
