In evaluating a neural net, should NMSE be based only on test subset of data?

Views: 1 (last 30 days)
In answers like this, Greg Heath suggests using the normalized mean square error, NMSE, to compare the performance of different neural networks and pick the best one.
I have been calculating NMSE from all samples in the training set t and prediction y,
[net, tr, y, e] = train(net, x, t); % Train network
vart1 = var(t',1);
% MSE for a naive constant output model
% that always outputs average of target data
MSE00 = mean(vart1);
NMSE = mse(t-y)/MSE00; % Normalize
That includes the training samples, and so may favor models that fit the training data well but not new data. In order to choose the most robust model, should I calculate NMSE from the test samples only?
iTest = tr.testInd; % Index to the samples that were set aside for testing
NMSE_test_only = mse(t(:,iTest)-y(:,iTest))/MSE00; % Only use test samples
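Building on the snippet above, all four NMSE values (training, validation, test, and all samples) can be computed from the same division indices returned in tr. This is a sketch, not an official API pattern; it assumes net, x, and t are defined as above, and normalizes every split by the variance of the full target set:

```matlab
% Sketch: NMSE for each data split, all normalized by MSE00,
% the MSE of a naive model that always outputs the target mean.
[net, tr, y, e] = train(net, x, t);       % Train network
MSE00 = mean(var(t', 1));                 % Naive constant-model MSE

splits = {tr.trainInd, tr.valInd, tr.testInd, 1:size(t, 2)};
names  = {'train', 'val', 'test', 'all'};
for k = 1:numel(splits)
    ind = splits{k};
    NMSE.(names{k}) = mse(t(:, ind) - y(:, ind)) / MSE00;
end
```

NMSE.test is then the test-only value asked about, and NMSE.all reproduces the original calculation over every sample.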

Accepted Answer

Greg Heath
Greg Heath on 19 May 2019
For serious work I calculate FOUR values of NMSE:
1. 70% Training
2. 15% Validation
3. 15% Test
4. 100% All
for 10 (typically) random data divisions & initial weights, and try to use as few hidden nodes as possible.
Hope this helps
Greg
2 Comments
KAE
KAE on 20 May 2019
Edited: KAE on 21 May 2019
Once you have those 4 values of NMSE, do you pick the 'best' number of neurons (or whatever network feature you are optimizing) based on the net which has the lowest test NMSE, averaged over the 10 trials?
Greg Heath
Greg Heath on 27 May 2019
Typically, I try to minimize the number of hidden nodes subject to the constraint NMSEtrn <= 0.01. I then rank those nets according to NMSEval and NMSEtst.
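The selection procedure described above can be sketched as a double loop over candidate hidden-layer sizes and random trials. This is an illustrative reconstruction, not Greg's actual code; the trial count, size range, and variable names are assumptions, and x and t are the input/target data from the question:

```matlab
% Sketch: keep nets whose training NMSE meets the constraint,
% then prefer the smallest hidden layer among the survivors.
Ntrials = 10;                             % random divisions/initializations per size
Hmax    = 20;                             % largest hidden-layer size to try (assumed)
MSE00   = mean(var(t', 1));               % naive constant-model MSE
results = [];                             % rows: [H, trial, NMSEtrn, NMSEval, NMSEtst]
for H = 1:Hmax
    for trial = 1:Ntrials
        net = fitnet(H);                  % fresh random weights & data division
        [net, tr, y] = train(net, x, t);
        nmse = @(ind) mse(t(:, ind) - y(:, ind)) / MSE00;
        if nmse(tr.trainInd) <= 0.01      % training constraint
            results(end+1, :) = [H, trial, nmse(tr.trainInd), ...
                                 nmse(tr.valInd), nmse(tr.testInd)]; %#ok<SAGROW>
        end
    end
end
% Among the surviving rows, take the smallest H, then rank by
% the validation and test NMSE columns (columns 4 and 5).
```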
Details can be found in my NEWSGROUP and ANSWERS posts.
Greg


More Answers (0)
