Is it important to normalise the input to a neural network before training?

I have a feature matrix of size 10000x400 (400 samples) and a target matrix of size 40x400 (40 classes). The input feature vector for each sample has 10,000 rows with values such as 0 123 212 242 123 45, etc. So I want to ask: should I normalise all the elements in the rows by using the standard formula
normalised element = (element - mean of its column) / standard deviation of that column?
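A minimal sketch of that standardization in MATLAB, assuming the 10000x400 matrix X holds one sample per column (the matrix name and placeholder data are illustrative); the per-column version matches the formula above, while the per-feature (per-row) version is what mapstd (Neural Network Toolbox) and zscore (Statistics Toolbox) compute:

X = randi([0 255], 10000, 400);   % placeholder data standing in for the real features

% Per-column standardization, as written in the formula (each sample scaled
% by its own column mean and standard deviation):
Xcol = bsxfun(@rdivide, bsxfun(@minus, X, mean(X,1)), std(X,0,1));

% Per-feature standardization (each of the 10000 rows mapped to zero mean /
% unit variance across the 400 samples):
Xrow = mapstd(X);            % Neural Network Toolbox
% Xrow = zscore(X, 0, 2);    % Statistics Toolbox equivalent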

Accepted Answer

Greg Heath on 16 Jul 2016
Edited: Greg Heath on 27 Jul 2016
1. Delete and/or modify numerical outliers. Standardization of the data to zero-mean/unit-variance is the most effective way to do this.
2. Keep the ranges of all input and target vector components comparable to help understand their relative importance.
3. Consider biases to be weights that act on unit vector components.
4. Keep the initial scalar products of weights and vectors within the linear regions of the sigmoids to avoid algebraic stagnation in the asymptotic regions.
5. Data scaling to [-1 1] is the MATLAB default. Standardization and no scaling are the alternatives. Since you already have unscaled and standardized data, you have a variety of choices. My choice is to use the standardized data but accept the [-1 1] default (see the sketch below this answer).
Why? ... because it is the easiest to code and understand.
Hope this helps.
Thank you for formally accepting my answer
Greg
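A minimal sketch of that choice (point 5 above), assuming the classification is done with patternnet and that X (10000x400) and T (40x400) are the input and target matrices; the inputs are standardized beforehand and the toolbox's default [-1 1] mapminmax input processing is simply left in place:

Xstd = mapstd(X);                 % each row (feature) to zero mean / unit variance

net = patternnet(10);             % hidden-layer size chosen arbitrarily for the sketch
disp(net.inputs{1}.processFcns)   % default is {'removeconstantrows','mapminmax'},
                                  % i.e. the [-1 1] scaling referred to above

net = train(net, Xstd, T);        % train on the standardized data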
2 Comments
Newman on 16 Jul 2016
What do you mean by regularised data, and how do I accept the default -1 to 1? Also, one more question: should I normalise the image input before extracting the feature vector, or should I normalise the feature vector itself?
Greg Heath on 17 Jul 2016
1. My bad. zero-mean/unit-variance is STANDARDIZATION.
MATLAB:
help zscore
doc zscore
NNTOOLBOX:
help mapstd
doc mapstd
2. Defaults do not have to be accepted. They are what
the algorithm uses if an alternative is not specified.
3. Typically, feature vectors are combined to create feature matrices so that inputs and outputs are matrices. If you decide not to accept the MAPMINMAX default, you can use
a. MAPSTD
b. '' % No normalization
(see the sketch below)
Hope this helps.
Greg
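A minimal sketch of those alternatives, again assuming a patternnet and illustrative variable names X and T; the only change from the default is the list of input processing functions:

net = patternnet(10);      % hidden-layer size is illustrative

% (a) Replace the default mapminmax with mapstd:
net.inputs{1}.processFcns = {'removeconstantrows','mapstd'};

% (b) Or apply no input normalization at all (Greg's '' option, sketched here
%     as an empty processing list):
% net.inputs{1}.processFcns = {};

net = train(net, X, T);    % X: 10000x400 inputs, T: 40x400 targets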


More Answers (1)

Walter Roberson on 15 Jul 2016
Edited: Walter Roberson on 15 Jul 2016
Algebraically it is not important, as long as you adjust your transfer functions appropriately. In practice, with floating-point round-off and limited range, there can be effects anywhere from minor to major, depending on your transfer functions.
Normalizing makes it a lot easier to compare the effects of different parameters. If A varies twice as much as B, is that because A is more important in determining the correlation, or is it because the range of A is larger and maybe A is actually less important? When you normalize, you do not have to think as hard about how to interpret the results.
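To illustrate the comparability point with made-up data: two inputs contribute about equally to a target, but their raw ranges differ by a factor of roughly 1000, so weights fitted on the raw data are hard to compare, while weights fitted on standardized data are not (a plain least-squares fit is used only to keep the sketch short; zscore requires the Statistics Toolbox):

rng(0);                                        % made-up data for illustration only
A = 1000*rand(1,200);                          % input A, range roughly [0 1000]
B = rand(1,200);                               % input B, range roughly [0 1]
t = 0.001*A + 0.5*B + 0.01*randn(1,200);       % both inputs matter about equally

w_raw = [A; B].' \ t.';                        % least-squares weights on raw inputs
w_std = zscore([A; B], 0, 2).' \ zscore(t).';  % weights after standardizing

disp([w_raw w_std])   % raw weights differ by ~500x; standardized ones are comparable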
