Normalizing data for neural networks
Hi,
I've read that it is good practice to normalize data before training a neural network.
There are different ways of normalizing data.
Does the data have to be normalized between 0 and 1, or can it be done using the standardize function, which won't necessarily give you numbers between 0 and 1 and could give you negative numbers?
Many thanks
Accepted Answer
More Answers (4)
Greg Heath
11 Jan 2012
The best combination to use for a MLP (e.g., NEWFF) with one or more hidden layers is
1. TANSIG hidden layer activation functions
2. EITHER standardization (zero-mean/unit-variance; doc MAPSTD)
OR [-1, 1] normalization ([min, max] => [-1, 1]; doc MAPMINMAX)
Convincing demonstrations are available in the comp.ai.neural-nets FAQ.
For classification among c classes, using the columns of the c-dimensional identity matrix eye(c) as targets guarantees that the outputs can be interpreted as valid approximations to the input-conditional posterior probabilities. For that reason, the commonly used normalization to [0.1, 0.9] is not recommended.
WARNING: NEWFF automatically uses the MINMAX normalization as a default. Standardization must be explicitly specified.
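As a sketch of the two options above (both toolbox functions expect one variable per ROW and one sample per column):

% p is an I-by-N input matrix: I variables (rows), N samples (columns)
[pstd, stdsettings] = mapstd(p);    % zero-mean / unit-variance per row
[pn, mmsettings]    = mapminmax(p); % each row mapped to [-1, 1] by default

The second output of each call is a settings structure that records the statistics, so the same transform can later be applied to new data or reversed.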
Hope this helps.
Greg
4 Comments
John
11 Jan 2012
owr
11 Jan 2012
I don't have access to the Neural Network Toolbox anymore, but if I recall correctly you should be able to generate code from the nprtool GUI (last tab, maybe?). You can use this code to do your work without the GUI, customize it as need be, and also learn from it to gain a deeper understanding.
What I think Greg is referring to above is the fact that the function "newff" (a quick function to initialize a network) uses the built-in normalization (see toolbox function mapminmax). If you want to change this, you'll have to make some custom changes. I don't recall whether nprtool uses newff - this can be verified by generating and viewing the code.
This is all from memory, as I don't have access to the toolbox anymore - so take my comments as general guidelines, not as absolutes.
Good luck.
John
12 Jan 2012
Greg Heath
13 Jan 2012
Standardization means zero-mean/unit-variance.
My preferences:
1. TANSIG in hidden layers
2. Standardize reals and mixtures of reals and binary.
3. {-1,1} for binary and reals that have bounds imposed by math or physics.
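A rough sketch of preferences 2 and 3 for a mixed input matrix (rows = variables, columns = samples; the row index vectors realrows and binrows are hypothetical and must match your own data):

p(realrows,:) = mapstd(p(realrows,:));  % standardize the real-valued rows
p(binrows,:)  = 2*p(binrows,:) - 1;     % map {0,1} binary rows to {-1,1}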
Hope this helps.
Greg
Greg Heath
14 Jan 2012
1 vote
In general, if you decide to standardize or normalize, each ROW is treated SEPARATELY.
If you do this, either use MAPSTD, MAPMINMAX, or the following:
[I, N] = size(p);
%STANDARDIZATION (each row gets zero mean and unit variance)
meanp = repmat(mean(p,2),1,N);
stdp  = repmat(std(p,0,2),1,N);
pstd  = (p-meanp)./stdp;
%NORMALIZATION (each row mapped from [min,max] to [minpn,maxpn])
minpn = -1; maxpn = 1;   % target range, e.g., [-1, 1]
minp  = repmat(min(p,[],2),1,N);
maxp  = repmat(max(p,[],2),1,N);
pn    = minpn + (maxpn-minpn).*(p-minp)./(maxp-minp);
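Whichever you choose, compute the statistics from the TRAINING data only and reuse them on the validation/test data. With the toolbox functions, the settings structure handles this, e.g. (a sketch, assuming mapminmax; mapstd works the same way):

[pn, ps] = mapminmax(ptrain);              % each row of ptrain mapped to [-1, 1]
ptestn   = mapminmax('apply', ptest, ps);  % reuse the training rows' min/max
porig    = mapminmax('reverse', pn, ps);   % undo the normalization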
Hope this helps
Greg
4 Comments
John
16 Jan 2012
fehmi zarzoum
24 May 2017
Hi, I get: Undefined function or variable 'pmin'.
Greg Heath
31 May 2017
Yeah, should be minp.
electronx engr
4 Nov 2017
Please, can you help me with this: after training with normalized data, how can I get the network (using the gensim command) to work on unnormalized input, since I created and trained the network using normalized inputs and outputs?
Sarillee
25 Mar 2013
0 votes
y = (x - min(x))/(max(x) - min(x))
Try this...
x is the input, y is the output.
1 Comment
Greg Heath
10 May 2013
Not valid for matrix inputs
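A matrix-valid variant of that formula (a sketch) normalizes each row separately with element-wise operations:

[I, N] = size(x);
minx = repmat(min(x,[],2),1,N);
maxx = repmat(max(x,[],2),1,N);
y = (x - minx)./(maxx - minx);   % each row mapped to [0, 1]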
Imran Babar
8 May 2013
0 votes
mu_input = mean(trainingInput);
std_input = std(trainingInput);
trainingInput = (trainingInput(:,:) - mu_input(:,1))/std_input(:,1);
I hope this will serve your purpose
2 Comments
Greg Heath
10 May 2013
Not valid for matrix inputs
Abul Fujail
12 Dec 2013
In the case of matrix data, does the min and max value correspond to a single column or to the whole dataset? E.g., I have 5 input columns of data; should I take the min/max of each column separately and normalize, or take the min/max over all the columns and then calculate?