Why is there a difference in performance error using 'nntool' and 'nftool' when the properties assumed are same?

4 views (last 30 days)
I created a neural network in 'nftool' with 10 inputs and 20 outputs using 5283 samples and found that an architecture with 16 hidden-layer neurons gives the lowest performance error, 0.012. I then tried the same properties in 'nntool', expecting to get the same error. These were the network properties:
Network Type: Feed-forward backprop
Input Ranges: Got from Input
Adaptation Function: LEARNGDM
Number of Layers: 2
Properties :
Layer 1: 16 neurons, LOGSIG
Layer 2: 20 neurons, PURELIN
However, when I train this network I get a performance error of around 0.5. Why is the error so much larger?
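For reference, the configuration above can be reproduced at the command line instead of in either GUI. This is a minimal sketch, assuming input matrix x (10xN) and target matrix t (20xN); the variable names and the choice of feedforwardnet are illustrative, not taken from the original post:

```matlab
% Sketch: feed-forward net matching the properties above
% (assumed data: x is 10xN inputs, t is 20xN targets)
rng(0)                                   % fix the RNG so results are repeatable
net = feedforwardnet(16);                % one hidden layer with 16 neurons
net.layers{1}.transferFcn = 'logsig';    % LOGSIG hidden layer
net.layers{2}.transferFcn = 'purelin';   % PURELIN output layer
[net, tr] = train(net, x, t);            % random init + random trn/val/tst split
perf = perform(net, t, net(x));          % MSE performance, comparable across runs
```

With the RNG fixed before training, the same script produces the same error every run, which makes GUI-vs-GUI comparisons meaningful.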

Accepted Answer

Greg Heath
Greg Heath on 24 April 2013
The nets are initialized with random weights and, by default, a random trn/val/tst data division. Sometimes these lead to good performance and sometimes they do not.
1. Always initialize your RNG to a specified state before using the function CONFIGURE or, if that is not used, the function TRAIN. Then you can always reproduce what you have done.
2. Make Ntrials (e.g., 10) different designs in a loop. Initialize the RNG before the loop and record its state at the beginning of each pass. Record the Ntrials trn/val/tst performances and choose the design with the best validation performance. Then get an unbiased estimate of performance on nondesign data from the test set.
3. Sometimes the summary statistics (e.g., min, median, stdv, max) are the desired result.
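The three steps above can be sketched as follows. This is an illustrative outline, again assuming data x and t; the field names tr.best_vperf and tr.best_tperf come from the training record returned by train:

```matlab
% Sketch of the multi-trial design procedure (assumed data: x, t)
rng(0)                                   % step 1: fix the RNG state once
Ntrials  = 10;
bestvperf = Inf;
for i = 1:Ntrials
    s{i} = rng;                          % step 2: record RNG state per design
    net = feedforwardnet(16);
    [net, tr] = train(net, x, t);        % fresh random init + data division
    vperf(i) = tr.best_vperf;            % validation performance
    tperf(i) = tr.best_tperf;            % test performance (unbiased estimate)
    if vperf(i) < bestvperf
        bestvperf = vperf(i);
        bestnet   = net;                 % keep the best-validation design
    end
end
% step 3: summary statistics over the trials
stats = [min(vperf) median(vperf) std(vperf) max(vperf)];
```

Because each pass's RNG state is saved in s{i}, any individual design can be reproduced exactly by restoring that state and retraining.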
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)
