Backpropagation learning of an MLP doesn't converge, why?

3 views (last 30 days)

Jakub on 3 September 2013
Hi, I am working on image pattern recognition in MATLAB, but for certain reasons I cannot use the NN toolbox (I have to rewrite everything in LabVIEW for deployment). So I implemented the learning algorithm for a 2-layer MLP myself, but it always gets stuck in a local minimum. I tried the backpropagation algorithm as well as a genetic algorithm, but neither computes the weights well enough. I tested both on simple data with a few targets and they work perfectly, but with more parameters the results are poor.
When I use the nnstart interface with the same data, I get perfect results, but I am not able to rewrite that network, or to take the calculated weights and get a network represented only by matrices. I use the same weights, transfer function, topology, and data pre-processing, and still get different results.
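One thing worth double-checking here (the sketch below is my assumption about the mismatch, not verified against the actual nnstart network): by default the toolbox wraps the network in mapminmax pre-/post-processing, uses tansig in the hidden layer, and adds bias terms, none of which the plain-matrix logsig network further down has. Disabling the processing functions makes the trained network literally just matrices:

X = rand(4,100); T = rand(2,100); % toy data (columns are samples)
net = feedforwardnet(10); % check net.layers{i}.transferFcn for the net nnstart actually built
net.inputs{1}.processFcns = {}; % disable input pre-processing (mapminmax etc.)
net.outputs{2}.processFcns = {}; % disable output post-processing
net = train(net,X,T);
W1 = net.IW{1,1}; b1 = net.b{1}; % input-to-hidden weights and biases
W2 = net.LW{2,1}; b2 = net.b{2}; % hidden-to-output weights and biases
x = X(:,1);
y_manual = W2*tansig(W1*x + b1) + b2; % plain-matrix forward pass (default transfers: tansig, purelin)
y_toolbox = net(x); % should match y_manual

If those two outputs match, the remaining difference from a hand-rolled network comes down to the transfer functions and the missing biases.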
Any suggestions that might ease my suffering would be welcome.
Thanks in advance.
Jakub
My code for learning:
% x - one sample
% t - corresponding target
% w1, w2 - weights
% n - learning rate
% w_new_1, w_new_2 - updated weights
function [w_new_1,w_new_2] = back_prop(x,t,w1,w2,n)
y_1 = logsig(x'*w1); % output of 1st (hidden) layer
y_2 = logsig(y_1*w2); % output of 2nd (output) layer
sigma_2 = y_2.*(1-y_2).*(t(:,1)'-y_2); % delta for 2nd layer
w_new_2 = zeros(size(w2)); % preallocate
for i = 1:length(sigma_2)
    w_new_2(:,i) = w2(:,i) + (sigma_2(i).*y_1)'*n; % update weights between hidden layer and output
end
sigma_1 = y_1.*(1-y_1).*(sigma_2*w2'); % delta for 1st layer: backpropagate through the original w2, not the already-updated w_new_2
w_new_1 = zeros(size(w1)); % preallocate
for i = 1:length(x)
    w_new_1(i,:) = w1(i,:) + (sigma_1.*x(i))*n; % update weights between input and hidden layer
end
end
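For reference, a minimal sketch of how this function can be driven (the sizes, data, and epoch count here are made up): online training, one randomly ordered sample per call.

X = rand(4,100); % toy data: 4 features, 100 samples (columns)
T = rand(2,100); % 2 targets per sample
w1 = 0.1*randn(4,3); % small random initial weights, 3 hidden units
w2 = 0.1*randn(3,2);
n = 0.1; % learning rate
for epoch = 1:1000
    for k = randperm(size(X,2)) % shuffle the sample order every epoch
        [w1,w2] = back_prop(X(:,k),T(:,k),w1,w2,n);
    end
end

Small initial weights and inputs scaled to a modest range matter here, because logsig saturates and the y.*(1-y) factor kills the gradient for large activations.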

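A finite-difference gradient check is another sketch worth running against back_prop (variable names below are mine): with n = 1 the step w_new - w should equal the negative gradient of the squared error E = 0.5*sum((t(:,1)' - y_2).^2), so a systematic mismatch points at the backpropagation code rather than at local minima.

x = rand(4,1); t = rand(2,1); % toy sample and target
w1 = 0.1*randn(4,3); w2 = 0.1*randn(3,2);
[w_new_1,~] = back_prop(x,t,w1,w2,1); % with n = 1 the step equals the negative gradient
g_bp = w_new_1 - w1; % analytic step for w1
d = 1e-6;
g_fd = zeros(size(w1));
for k = 1:numel(w1)
    wp = w1; wp(k) = wp(k) + d;
    wm = w1; wm(k) = wm(k) - d;
    Ep = 0.5*sum((t' - logsig(logsig(x'*wp)*w2)).^2);
    Em = 0.5*sum((t' - logsig(logsig(x'*wm)*w2)).^2);
    g_fd(k) = -(Ep - Em)/(2*d); % minus sign: the update steps downhill
end
max(abs(g_bp(:) - g_fd(:))) % should be on the order of 1e-9 if the two agree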
Answers (0)
