Mapminmax process function causes the NN to simulate outputs incorrectly
Hello,
I have a problem with some outputs from a trained custom neural network. I am using MATLAB 2014 with Neural Network Toolbox 8.2.
I have created a simple feedforward NN for classification. I used some Inputs and Targets, trained the NN, and tried to simulate Outputs given Inputs from the same range. The NN I created gives me incorrect outputs. First, it returns outputs only in the 0...1 range while the targets are in the range -6...3, and when I add the process function 'mapminmax' to the input and output, the results are still wrong:
1. When the targets are 0...1, there is an offset of 0.5, so the outputs are 0.5 and 1.
2. When the targets are e.g. -6...3, the outputs are around -3...3.
I am trying to understand what I am doing wrong.
PS. I have already asked this question on Stack Overflow, where I provided more details and code: http://stackoverflow.com/questions/36449224/mapminmax-process-function-causes-that-nn-incorrectly-simulates-outputs
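For reference, here is my understanding of how mapminmax behaves on its own (a minimal check, assuming its documented default output range of [-1, 1]):
t = [zeros(1,11) ones(1,10)];
[tn, ps] = mapminmax(t);               % tn is t rescaled row-wise to [-1, 1]
t_back = mapminmax('reverse', tn, ps); % the reverse mapping recovers the original t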
The code to test the NN:
clear all; close all; clc;
net = NNPatternRecognition;
%net.inputs{1}.processFcns = {'mapminmax'};
%net.outputs{2}.processFcns = {'mapminmax'};
Inputs = -10:10;
%Targets = [-6*ones(1,11) 3*ones(1,10)];
Targets = [zeros(1,11) ones(1,10)];
[net,tr] = train(net,Inputs,Targets);
net(-10:10)
The code to create the NN:
function net = NNPatternRecognition
net = nntest;
end
function net = nntest
net = network;
net.numInputs = 1;
net.numLayers = 2;
net.biasConnect = [1 1]';
net.inputConnect = [1; 0];
net.layerConnect = [0 0; 1 0];
net.outputConnect = [0 1];
% Inputs
%net.inputs{1}.processFcns = {'mapminmax'};
net.inputWeights{1}.learnFcn = 'learngdm';
% layer 1 (hidden, receives the input)
net.layers{1}.initFcn = 'initnw';
net.layers{1}.netInputFcn = 'netsum';
net.layers{1}.transferFcn = 'tansig';
net.layers{1}.size = 3;
% layer 2 (output)
net.layers{2}.initFcn = 'initnw';
net.layers{2}.netInputFcn = 'netsum';
net.layers{2}.transferFcn = 'purelin';
net.layers{2}.size = 1;
% Network functions
net.adaptFcn = 'adaptwb';
net.derivFcn = 'defaultderiv';
net.divideFcn = 'dividerand'; %'divideblock';
net.initFcn = 'initlay';
net.performFcn = 'crossentropy';
net.trainFcn = 'trainscg';
% Outputs
%net.outputs{2}.processFcns = {'mapminmax'};
%net.outputs{2}.exampleOutput = [0 1];
net.trainParam.showWindow = false;
net.trainParam.showCommandLine = true;
end
0 Comments
Accepted Answer
Greg Heath
8 Apr 2016
close all, clear all, clc, plt = 0, tic
x = -10:10; N = length(x)
trueind = 1 + [zeros(1,11) ones(1,10)]; % class indices: 1 for x <= 0, 2 for x > 0
t = full(ind2vec(trueind)) % 2-row one-hot target matrix
plt = plt+1, figure(plt), hold on
plot( x( 1:11), trueind( 1:11) ,'o' )
plot( x(12:21), trueind(12:21),'ro' )
axis([ -11 11 0 3 ])
title('CLASS INDICES')
rng('default') % for reproducibility
net = patternnet; % defaults: softmax output, crossentropy performance
[ net tr y e ] = train( net, x, t );
outind = vec2ind(y) % convert output probabilities back to class indices
plot( x( 1:11), outind( 1:11) ,'x' ,'LineWidth',2)
plot( x(12:21), outind(12:21),'rx' ,'LineWidth',2)
err = outind ~= trueind;
Nerr = sum(err) % 1
PctErr = 100*Nerr/N % 4.7619
Hope this helps.
For details, remove the semicolon to get
net = net
ALSO, for a trn/val/tst breakdown use
tr = tr
Thank you for formally accepting my answer
Greg
3 Comments
Brendan Hamm
12 Apr 2016
There should be exactly 2 rows in the output: the probability of Class_1 and the probability of Class_2. You just refer to these as Class_0 and Class_1, respectively.
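For example, the conversion between class indices and the 2-row form (the same ind2vec/vec2ind pair Greg uses above):
trueind = 1 + [zeros(1,11) ones(1,10)]; % class indices 1 and 2
t = full(ind2vec(trueind));             % 2 rows: row k is the indicator for class k
outind = vec2ind(t);                    % back to class indices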
More Answers (1)
Brendan Hamm
6 Apr 2016
You would likely have better luck if you just started with patternnet, which is meant for NN classification.
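For example, a minimal patternnet sketch for your data (the hidden-layer size of 10 is just the patternnet default; the 2-row targets come from ind2vec):
x = -10:10;
t = full(ind2vec(1 + [zeros(1,11) ones(1,10)])); % 2 classes -> 2-row one-hot targets
net = patternnet(10); % softmax output and crossentropy performance by default
net = train(net, x, t);
y = net(x);           % each column holds the two class probabilities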
What I see that is wrong with your current implementation is that you have a linear transferFcn for your second layer. For classification purposes this should really be a softmax function. That is, change
net.layers{2}.transferFcn = 'purelin';
to
net.layers{2}.transferFcn = 'softmax';
There are also 2 functions used for the processFcns for the input and output:
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
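Putting both changes together, a sketch of the modified settings inside your nntest function (untested against your exact setup; the output size of 2 follows from using one softmax row per class, with 2-row targets to match):
net.layers{2}.transferFcn = 'softmax'; % class probabilities instead of purelin
net.layers{2}.size = 2;                % one output row per class
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};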
4 Comments
Brendan Hamm
7 Apr 2016
As a shameless plug, if you ever want your Machine Learning Algorithms to feel a little bit less "black-box", consider attending the Machine Learning with MATLAB course.