In evaluating a neural net, should NMSE be based only on the test subset of the data?
In answers like this, Greg Heath suggests using the normalized mean square error, NMSE, to compare the performance of different neural networks and pick the best one.
I have been calculating NMSE over all samples, using the full target data t and the network prediction y:
[net, tr, y, e] = train(net, x, t); % Train the network
vart1 = var(t', 1);                 % Variance of each target row (normalized by N)
% MSE of a naive constant-output model
% that always outputs the average of the target data
MSE00 = mean(vart1);
NMSE = mse(t - y)/MSE00;            % Normalize
That includes the training samples, so it may favor models that fit the training data well but generalize poorly to new data. To choose the most robust model, should I calculate NMSE from the test samples only?
iTest = tr.testInd; % Indices of the samples set aside for testing
NMSE_test_only = mse(t(:,iTest) - y(:,iTest))/MSE00; % Use only the test samples
Accepted Answer
Greg Heath
19 May 2019
For serious work I calculate FOUR values of NMSE:
1. 70% Training
2. 15% Validation
3. 15% Test
4. 100% All
for (typically) 10 random data divisions and initial weight sets, and try to use as few hidden nodes as possible.
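A minimal sketch of that loop, assuming the x, t, train, and mse usage from the question and a fitnet-style network; the trial count and hidden-layer size H are placeholders:
Ntrials = 10;             % typical number of random divisions / weight sets
H = 10;                   % candidate number of hidden nodes (placeholder)
MSE00 = mean(var(t', 1)); % reference MSE of the naive constant-output model
for trial = 1:Ntrials
    net = fitnet(H);                    % new random initial weights each trial
    [net, tr, y, e] = train(net, x, t); % data division is also re-randomized
    % NMSE on each subset, plus on all samples, using the same MSE00
    NMSEtrn(trial) = mse(e(:, tr.trainInd))/MSE00;
    NMSEval(trial) = mse(e(:, tr.valInd))/MSE00;
    NMSEtst(trial) = mse(e(:, tr.testInd))/MSE00;
    NMSEall(trial) = mse(e)/MSE00;
end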
Hope this helps
Greg
2 Comments
Greg Heath
27 May 2019
Typically, I try to minimize the number of hidden nodes subject to the constraint NMSEtrn <= 0.01. I then rank those nets according to NMSEval and NMSEtst, as in the sketch below.
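A hedged sketch of that search, assuming the same x and t as above; Hmax, and the simplification of one trial per candidate H, are illustrative choices, not Greg's exact procedure:
Hmax = 20;                % upper bound on hidden nodes to try (illustrative)
MSE00 = mean(var(t', 1));
for H = 1:Hmax
    net = fitnet(H);
    [net, tr, y, e] = train(net, x, t);
    NMSEtrn = mse(e(:, tr.trainInd))/MSE00;
    if NMSEtrn <= 0.01    % training-error constraint is met
        % candidate net; rank such nets by validation/test NMSE
        NMSEval = mse(e(:, tr.valInd))/MSE00;
        NMSEtst = mse(e(:, tr.testInd))/MSE00;
        break             % smallest H that satisfies the constraint
    end
end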
Details can be found in my NEWSGROUP and ANSWERS posts.
Greg