Back Propagation Neural Network

35 views (last 30 days)
Tomaloka Chowdhury on 4 Nov 2013
Answered: FATIH GUNDOGAN on 23 Apr 2021
Hi.
I need working Back Propagation NN code. My inputs are of dimension 100x3 and my outputs are of dimension 100x2; the sample size is 100.
For example, the first 5 samples have inputs [-46 -69 -82; -46 -69 -82; -46 -69 -82; -46 -69 -82; -46 -69 -82; ...] and outputs [0 0; 2 1; 5 5; 4 3; 3 5; ...].
Please suggest whether BP is suitable for my problem, and which learning technique and activation function would be better for solving it. Do I need to apply generalization? Kindly help me with the MATLAB code if possible. Thank you very much.

Accepted Answer

Greg Heath on 4 Nov 2013
Convert to matrices and transpose:
[ I N ] = size(inputs)
[ O N ] = size(targets)
Use fitnet for regression and curve-fitting
help fitnet
doc fitnet
Use patternnet for classification and pattern-recognition
For examples beyond the help/doc documentation, try searching with
greg fitnet
greg patternnet
in both the NEWSGROUP and ANSWERS.
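For reference, here is a minimal fitnet sketch along the lines Greg describes, assuming the 100x3 input and 100x2 target arrays from the question (the variable names and the hidden-layer size of 10 are illustrative, not prescribed):
inputs  = inputs';                  % transpose to 3 x 100 (I = 3 input variables, N = 100 samples)
targets = targets';                 % transpose to 2 x 100 (O = 2 outputs)
net = fitnet(10);                   % regression / curve-fitting network with 10 hidden neurons
net = train(net, inputs, targets);  % Levenberg-Marquardt training by default
outputs = net(inputs);
perf = mse(net, targets, outputs)   % performance on the training data
If the two target columns were class labels rather than continuous values, patternnet would replace fitnet in the same way.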

More Answers (2)

Tomaloka Chowdhury on 4 Nov 2013
Edited: Tomaloka Chowdhury on 4 Nov 2013
Hi Greg,
Thank you so much for your response. Currently I am using the code below, and I find that all of my obtained outputs are 0.5. Can you please tell me why this is happening and how I can obtain correct outputs?
...........................................................
function Network = backpropagation(L,n,m,smse,X,D)
[P,N] = size(X);
[Pd,M] = size(D);
%%%%% INITIALIZATION PHASE %%%%%
nLayers = length(L); % we'll use the number of layers often
%% Pre-allocation of the weight matrix between each layer
w = cell(nLayers-1,1); % a weight matrix between each layer
for i=1:nLayers-2
    w{i} = [1 - 2.*rand(L(i+1),L(i)+1) ; zeros(1,L(i)+1)];
    %w{i} = [1 - 2.*rand(L(i+2),L(i)+2); zeros(1,L(i)+1)];
end
w{end} = 1 - 2.*rand(L(end),L(end-1)+1);
% initialize stopping conditions
mse = Inf; % assuming the initial weight matrices are bad
epochs = 0;
mtxmse = [];
%%%%% PREALLOCATION PHASE %%%%%
% Activation:
a = cell(nLayers,1); % one activation matrix for each layer
a{1} = [X ones(P,1)];
for i=2:nLayers-1
    a{i} = ones(P,L(i)+1); % inner layers include a bias node (P-by-Nodes+1)
end
a{end} = ones(P,L(end)); % no bias node at output layer
% net input at node k of the ith layer for the jth sample
net = cell(nLayers-1,1); % one net matrix for each layer, excluding the input
for i=1:nLayers-2
    net{i} = ones(P,L(i+1)+1); % affix bias node
end
net{end} = ones(P,L(end));
% the sum of the weight changes at layer i for all samples
prev_dw = cell(nLayers-1,1); sum_dw = cell(nLayers-1,1);
for i=1:nLayers-1
    prev_dw{i} = zeros(size(w{i})); % prev_dw starts at 0
    sum_dw{i} = zeros(size(w{i}));
end
%% FORWARD AND BACKWARD CALCULATION FOR EACH EPOCH
while mse > smse && epochs < 5000
    % FEEDFORWARD PHASE: calculate input/output of each layer for all samples
    for i=1:nLayers-1
        net{i} = a{i} * w{i}'; % compute inputs to current layer
        if i < nLayers-1 % inner layers
            a{i+1} = [2./(1+exp(-net{i}(:,1:end-1)))-1 ones(P,1)];
        else % output layer
            a{i+1} = 2 ./ (1 + exp(-net{i})) - 1;
        end
    end
    % calculate sum squared error of all samples
    err = (D-a{end}); % save this for later
    sse = sum(sum(err.^2)); % sum of the error for all samples, and all nodes
    % BACKPROPAGATION PHASE: calculate the modified error at the output layer
    delta = err .* (1+a{end}) .* (1-a{end});
    for i=nLayers-1:-1:1
        sum_dw{i} = n * delta' * a{i};
        if i > 1
            delta = (1+a{i}) .* (1-a{i}) .* (delta*w{i});
        end
    end
    % update prev_dw, the weight matrices, the epoch count and the mse
    for i=1:nLayers-1
        prev_dw{i} = (sum_dw{i} ./ P) + (m * prev_dw{i});
        w{i} = w{i} + prev_dw{i};
    end
    epochs = epochs + 1;
    mse = sse/(P*M); % mse = 1/P * 1/M * summed squared error
    O = a{end};
end
% Return the trained network
Network.structure = L;   % layer architecture
Network.weights = w;     % weights
Network.epochs = epochs; % epochs
Network.mse = mse;       % mean square error
Network.O = O;           % network outputs for the training data
Network.mtxmse = mtxmse; % matrix of mean square error
..................................................................
L=[3 3 2]; %Layer Architecture: Number of neurons in 3 different layers
N=.2; M=.5; smse=.01;
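For context, the function above would presumably be called along these lines (a sketch only; X and D stand for the 100x3 input and 100x2 target matrices from the question, and the N and M values above are passed as the learning rate n and momentum m):
X = inputs;                   % 100x3 input samples, one sample per row
D = targets;                  % 100x2 desired outputs
L = [3 3 2];                  % 3 inputs, 3 hidden neurons, 2 outputs
N = .2; M = .5; smse = .01;   % learning rate, momentum, target mean squared error
Network = backpropagation(L, N, M, smse, X, D);
Note that the bipolar sigmoid at the output layer produces values in (-1, 1), so targets such as 5 would need to be scaled into that range before training.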

FATIH GUNDOGAN on 23 Apr 2021
Can you explain your code structure? How do you define the variables and call the function?
function Network = backpropagation(L,n,m,smse,X,D)
[P,N] = size(X);
[Pd,M] = size(D);
%%%%% INITIALIZATION PHASE %%%%%
nLayers = length(L); % we'll use the number of layers often
%%Pre-allocation of the weight matrix between each layer
w = cell(nLayers-1,1); % a weight matrix between each layer
for i=1:nLayers-2
w{i} = [1 - 2.*rand(L(i+1),L(i)+1) ; zeros(1,L(i)+1)];
end
w{end} = 1 - 2.*rand(L(end),L(end-1)+1);
% initialize stopping conditions
mse = Inf; % assuming the initial weight matrices are bad
epochs = 0;
mtxmse = [];
%%%%% PREALLOCATION PHASE %%%%%
% Activation:
a = cell(nLayers,1); % one activation matrix for each layer
a{1} = [X ones(P,1)];
for i=2:nLayers-1
a{i} = ones(P,L(i)+1); % inner layers include a bias node (P-by-Nodes+1)
end
a{end} = ones(P,L(end)); % no bias node at output layer
% net input at node k of the ith layer for the jth sample
net = cell(nLayers-1,1); % one net matrix for each layer exclusive input
for i=1:nLayers-2
net{i} = ones(P,L(i+1)+1); % affix bias node
end
net{end} = ones(P,L(end));
% the sum of the weight matrix at layer i for all samples
prev_dw = cell(nLayers-1,1); sum_dw = cell(nLayers-1,1);
for i=1:nLayers-1
prev_dw{i} = zeros(size(w{i})); % prev_dw starts at 0
sum_dw{i} = zeros(size(w{i}));
end
%% FORWARD AND BACKWARD CALCULATION FOR EACH EPOCH
while mse > smse && epochs < 5000
% FEEDFORWARD PHASE: calculate input/output of each layer for all samples
for i=1:nLayers-1
net{i} = a{i} * w{i}'; % compute inputs to current layer
if i < nLayers-1 % inner layers
a{i+1} = [2./(1+exp(-net{i}(:,1:end-1)))-1 ones(P,1)];
else % output layers
a{i+1} = 2 ./ (1 + exp(-net{i})) - 1;
end
end
% calculate sum squared error of all samples
err = (D-a{end}); % save this for later
sse = sum(sum(err.^2)); % sum of the error for all samples, and all nodes
% BACKPROPAGATION PHASE: calculate the modified error at the output layer
delta = err .* (1+a{end}) .* (1-a{end});
for i=nLayers-1:-1:1
sum_dw{i} = n * delta' * a{i};
if i > 1
delta = (1+a{i}) .* (1-a{i}) .* (delta*w{i});
end
end
% update prev_dw, the weight matrices, the epoch count and the mse
for i=1:nLayers-1
prev_dw{i} = (sum_dw{i} ./ P) + (m * prev_dw{i});
w{i} = w{i} + prev_dw{i};
end
epochs = epochs + 1;
mse = sse/(P*M); % mse = 1/P * 1/M * summed squared error
O = a{end};
end
% Return the trained network
Network.structure = L; %Layer
Network.weights = w; %Weight
Network.epochs = epochs; %Epoch
Network.mse = mse; % Mean Square Error
Network.O = O;
Network.mtxmse = mtxmse; % Matrix of Mean Square Error
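As a side note on the delta step in the code above: with the bipolar sigmoid a = 2./(1+exp(-x)) - 1, the exact derivative is (1+a).*(1-a)/2, so the (1+a).*(1-a) factor in the loop differs from it only by a constant 1/2, which effectively rescales the learning rate. A quick self-contained numerical check (illustrative variable names):
x = linspace(-5, 5, 201);
a = 2./(1 + exp(-x)) - 1;            % bipolar sigmoid used in the code
dadx_exact   = (1 + a).*(1 - a)/2;   % analytic derivative
dadx_numeric = gradient(a, x);       % central-difference approximation
max(abs(dadx_exact - dadx_numeric))  % small; only discretization error remains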
