Some questions on Dynamic Neural Network
Hello,
I am using Dynamic Neural Network for time series prediction and have some questions:
1. Should I compare the performance of different NNs (in trial-and-error runs to find the best NN) on the testing set, or on the whole set (train, validation, and test)?
2. How can I find a confidence interval for the prediction? I found this question in another post, where Greg Heath used the MSE as the prediction variance when the transfer functions are sigmoidal and backpropagation is used for learning ... I am not changing those options, so is my prediction variance equal to the MSE?
3. I am training my NN in open-loop format and then using the closed-loop format for multi-step prediction. Does it make sense to use the same (train-validation-test) set that I used for open-loop training for closed-loop prediction? Or should I only use my test set with the closed loop?
4. Is there any way to change the number of hidden layers? I can only change the number of hidden neurons with this command:
hiddenLayerSize = 10;
net = narxnet(inputDelays, feedbackDelays, hiddenLayerSize);
Thanks,
Accepted Answer
Greg Heath
11 Feb 2015
Edited: Greg Heath, 28 Feb 2016
% 1. Should I compare the performance of different NNs (in trial-and-error runs to find the best NN) on the testing set, or on the whole set (train, validation, and test)?
Typically, the slightly biased validation performance is used to rank multiple nets and choose the best nets for estimating unbiased summary statistics of the general population.
The unbiased test performances (of the nets chosen via slightly biased validation performance) are then used to obtain the unbiased estimates of general-population statistics.
% 2. How can I find a confidence interval for the prediction? I found this question in another post, where Greg Heath used the MSE as the prediction variance when the transfer functions are sigmoidal and backpropagation is used for learning ... I am not changing those options, so is my prediction variance equal to the MSE?
I do not understand your terminology.
For an arbitrary error e,
mse(e) = mean(e)^2 + var(e,1)
(where var(e,1) is the biased, population variance). For good approximators, typically mean(e) = 0, so
var = mse.
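So, assuming zero-mean, approximately Gaussian errors, sqrt(MSE) can serve as the prediction standard deviation; a sketch (the t and y values below are hypothetical stand-ins for your targets and net outputs):

```matlab
% Sketch, assuming zero-mean, approximately Gaussian errors.
t = [1.0 2.0 3.0 4.0];            % targets (example values, assumed)
y = [1.1 1.9 3.2 3.8];            % net outputs (example values, assumed)
e   = t - y;
MSE = mean(e.^2);                 % mse(e) = mean(e)^2 + var(e,1); here mean(e) ~ 0
s   = sqrt(MSE);                  % prediction standard deviation
lo  = y - 1.96*s;                 % approximate 95% confidence band
hi  = y + 1.96*s;
```

The 1.96 multiplier is the Gaussian 95% quantile; if the errors are not close to Gaussian, the band is only a rough guide.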
As explained above, the slightly biased validation-set performance is used to rank and choose multiple nets, whose unbiased test-set performances are combined to estimate summary statistics for the general population.
% 3. I am training my NN in open-loop format and then using the closed-loop format for multi-step prediction. Does it make sense to use the same (train-validation-test) set that I used for open-loop training for closed-loop prediction? Or should I only use my test set with the closed loop?
I recommend that the closed-loop net be tested on the same data used for the open-loop design. If performance is unsatisfactory, either
a. find a better open-loop design to close, and/or
b. train the closed-loop design on the open-loop data, using the final open-loop weights as initial weights for the closed-loop training.
If successful, the closed-loop design can be used to estimate future performance beyond the given data.
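That recipe can be sketched as follows, assuming net is your trained open-loop narxnet, X and T are the original inputs/targets, and perfgoal is a threshold of your choosing:

```matlab
% Sketch: close the open-loop design, test on the same design data, and
% retrain closed-loop from the open-loop weights if needed (names assumed).
netc = closeloop(net);                         % keeps the open-loop weights
[Xc, Xic, Aic, Tc] = preparets(netc, X, {}, T);
Yc    = netc(Xc, Xic, Aic);                    % multi-step closed-loop prediction
perfc = perform(netc, Tc, Yc);                 % MSE on the design data
perfgoal = 0.01;                               % hypothetical threshold (assumed)
if perfc > perfgoal                            % unsatisfactory: retrain closed-loop
    netc = train(netc, Xc, Tc, Xic, Aic);      % open-loop weights as initialization
end
```

Starting the closed-loop training from the open-loop weights usually converges much faster than training the closed-loop net from scratch.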
% 4. Is there any way to change the number of hidden layers? I can only change the number of hidden neurons with this command: hiddenLayerSize = 10; net = narxnet(inputDelays, feedbackDelays, hiddenLayerSize)
For 2 hidden layers:
net = narxnet(inputDelays, feedbackDelays, [H1 H2]);
However, given enough data to support confident weight estimation, one hidden layer is a universal approximator and is sufficient.
Hope this helps.
Greg
2 Comments
Greg Heath
12 Feb 2015
I am not familiar with the concept.
When searching for the "best net" I design multiple nets. I use summary statistics estimated from all designs whose validation-set performance exceeds a certain threshold. However, the estimates themselves are calculated from test-set performance.
P.S. I ain't a statistician: I are a INJUNEER!