My neural network is giving the same output for all inputs... do you have any idea why?

2 views (last 30 days)
net = network(8,3,[1;1;1],[1 1 1 1 1 1 1 1;0 0 0 0 0 0 0 0;0 0 0 0 0 0 0 0],[0 0 0;1 0 0;0 1 0],[0 0 1]);
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
net.layers{3}.transferFcn = 'logsig';
net.layers{2}.dimensions = 10;
net.trainFcn = 'traingd';
net.trainParam.min_grad = 0.00001;
net.trainParam.epochs = 10000;
net.trainParam.lr = 0.3;
net.trainParam.goal = 0.0001;
net = init(net);
net.layers{1}.initFcn = 'initwb';
net.layers{2}.initFcn = 'initwb';
net.biases{1,1}.initFcn = 'rands';
net.biases{2,1}.initFcn = 'rands';
i = load('input.txt');
t = load('target.txt');
i = i';
t = t';
in = zeros(8,53); % normalized input
tn = zeros(1,53); % normalized target
for r = 1:8 % normalization of input
    min = i(r,1);
    max = i(r,1);
    for c = 2:53
        if i(r,c) < min
            min = i(r,c);
        end
        if i(r,c) > max
            max = i(r,c);
        end
    end
    for c = 1:53
        in(r,c) = 0.1 + (0.8*(i(r,c)-min)/(max-min));
    end
end
min = t(1); % normalization of target
max = t(1);
for c = 2:53
    if t(1,c) < min
        min = t(1,c);
    end
    if t(1,c) > max
        max = t(1,c);
    end
end
for c = 1:53
    tn(1,c) = 0.1 + (0.8*(t(1,c)-min)/(max-min));
end
net.divideFcn = 'divideblock';
net.divideParam.trainRatio = 0.85;
net.divideParam.valRatio = 0.05;
net.divideParam.testRatio = 0.1;
net.performFcn = 'mse';
[net,tr] = train(net,in,tn);
y = sim(net,in);
2 Comments
Greg Heath on 10 Sep 2013
1. Why would you post long code that will not run when cut and pasted into the command line because there is no sample data?
2. NEVER use MATLAB function names (e.g., max and min) as your own variable names.
3. When beginning to write a program, it is smart to use all of the function defaults and the MATLAB sample data most similar to yours:
help nndata
4. Once that runs, you can begin to modify it to fit your original problem.
5. Cut and paste the program to make sure it runs, or to obtain the error messages.
6. Post code that can be cut and pasted into the command line.
7. Include relevant error messages.
Hope this helps.
Greg
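
The min/max advice above can be sketched as follows, assuming the Neural Network Toolbox's mapminmax function is available (the 0.1 to 0.9 range matches the question's normalization; the rand data is a placeholder for the attached files, not real input):

```matlab
% Hypothetical rework of the question's normalization loops.
% mapminmax rescales each ROW of a matrix into [ymin, ymax], so the
% shadowed variable names min and max are no longer needed at all.
i = rand(8, 53);              % placeholder for load('input.txt')'
t = rand(1, 53);              % placeholder for load('target.txt')'
in = mapminmax(i, 0.1, 0.9);  % each input row mapped into [0.1, 0.9]
tn = mapminmax(t, 0.1, 0.9);  % target row mapped into [0.1, 0.9]
```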


Accepted Answer

Greg Heath on 10 Sep 2013
1. There is no reason to use more than one hidden layer.
2. You have created a net with 8 one-dimensional inputs instead of one 8-dimensional input.
3. After creating a net view it using the command
view(net)
4. Why not just use fitnet?
help fitnet
5. After you rewrite your code you can test it on the 8-input/1-output chemical_dataset if you want to post further questions.
Hope this helps.
Thank you for formally accepting my answer
Greg
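
The fitnet route suggested above can be sketched like this (a minimal example, not Greg's exact code; it assumes the toolbox's bundled chemical_dataset and keeps the 10 hidden neurons from the original post):

```matlab
% Defaults-first rewrite, tested on the 8-input/1-output chemical_dataset.
[x, t] = chemical_dataset;        % sample inputs and targets
net = fitnet(10);                 % one hidden layer of 10 neurons
net.divideFcn = 'divideblock';    % same data division as the question
net.divideParam.trainRatio = 0.85;
net.divideParam.valRatio   = 0.05;
net.divideParam.testRatio  = 0.10;
[net, tr] = train(net, x, t);
y = net(x);                       % outputs should now vary with inputs
view(net)                         % confirm one 8-dimensional input
```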
7 Comments
Devarshi Rai on 1 Nov 2013
NNTBX ver. 6.0.4. I couldn't find any MATLAB dataset that adheres to my specifications; input and target files are attached. It's classification: I am basically trying to determine the optimum neural network architecture (in terms of hidden-layer neurons for a 3-layered NN) by minimizing the average percentage error between predicted and target values. What do you mean by using default network properties and values? Shouldn't the neural network use these by itself?
Greg Heath on 25 Mar 2014
That doesn't make much sense to me, because it is usually more important to decrease large errors than to decrease small errors that have high relative errors.
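
A small numeric illustration of this point (hypothetical numbers, not from the thread):

```matlab
% Percentage error weights small targets heavily, while MSE (and most
% training criteria) weight the large absolute errors.
t = [100 1];                        % two targets
y = [ 90 2];                        % two predictions
abs_err = abs(y - t);               % [10 1]  -> the large target dominates
pct_err = 100 * abs(y - t)./abs(t); % [10 100] -> the small target dominates
```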


More Answers (0)
