
What is the search range of a hidden layer size in a regression task?

3 views (last 30 days)
Andre on 18 Feb 2015
Answered: Greg Heath on 28 Jun 2015
Searching for answers here in the forum, I read an answer by Greg describing what he usually does:
"An outer loop H = Hmin:dH:Hmax over number of hidden nodes and an inner loop i = 1:Ntrials over number of random trn/val/tst data divisions and random weight initialization trials for each value of H."
What should Hmin and Hmax be? I tried 0 and 20, and it found an answer at 20. So I grew the range to 30, and it found an answer at 30. The MSEs of the two results are similar (but the second has the lower MSE). Should I grow the range to 40?
When should I stop?

Accepted Answer

Greg Heath on 28 Jun 2015
[ I N ] = size(x)
[ O N ] = size(t)
Ntrn = N - 2*round(0.15*N) % default 0.7/0.15/0.15 trn/val/tst split
Ntrneq = Ntrn*O % No of training equations
% No. of unknown weights for an I-H-O node topology
Nw = (I+1)*H+(H+1)*O = (I+O+1)*H + O
% To prevent more unknowns than equations
Nw <= Ntrneq <=> H <= Hub % upper bound
Hub = floor( ( Ntrneq-O) / ( I + O + 1) )
% Depending on your prior information and the size of Hub,
% choose Hmin, dH and Hmax to search
h = Hmin:dH:Hmax % 0 <= Hmin <= Hmax <= Hub
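For concreteness, here is a minimal sketch of that bookkeeping. The data x and t are made-up placeholders (not from the original post), and spacing Hvec to give roughly ten candidates is just one reasonable first pass:
% Hypothetical data, only to make the numbers concrete (not from the post)
x = rand(4, 500);                        % I = 4 inputs,  N = 500 cases
t = sum(x.^2, 1) + 0.1*randn(1, 500);    % O = 1 output
[ I, N ] = size(x);
[ O, ~ ] = size(t);
Ntrn   = N - 2*round(0.15*N);            % default 0.7/0.15/0.15 trn/val/tst split
Ntrneq = Ntrn*O;                         % number of training equations
Hub    = floor( (Ntrneq - O) / (I + O + 1) )   % upper bound on H
% One reasonable first pass: ~10 candidates spread over 1..Hub
Hmin = 1;  Hmax = Hub;  dH = max(1, ceil(Hub/10));
Hvec = Hmin:dH:Hmax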
For each h candidate, design multiple nets. To keep the task manageable, I usually search 10 h candidates at a time with Ntrials = 10 designs per candidate to obtain 100 designs. Sometimes it may be necessary to start with a wide search followed by one or more narrow searches.
Don't forget to choose a repeatable initial random number seed so that you can reproduce any individual design.
Choose the smallest H that satisfies your training goal. For example, if the degree-of-freedom-adjusted training R-squared, R2trna, is greater than 0.995, then at least 99.5% of the training target variance is modeled by the net.
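As an illustration only, continuing from the snippet above, the double loop might look like the following. It uses the standard fitnet/train calls; the 0.995 goal and Ntrials = 10 follow the text, and everything else is an assumed sketch rather than a definitive recipe:
rng(0)                                   % repeatable seed so any design can be reproduced
Ntrials = 10;                            % designs per H candidate
MSE00   = mean(var(t', 1));              % reference MSE of the naive constant model
R2trna  = zeros(numel(Hvec), Ntrials);   % DOF-adjusted training R^2, one per design
for j = 1:numel(Hvec)
    h    = Hvec(j);
    Nw   = (I + O + 1)*h + O;            % unknown weights for the I-h-O topology
    Ndof = Ntrneq - Nw;                  % training degrees of freedom (positive for h < Hub)
    for i = 1:Ntrials
        net       = fitnet(h);           % new random data division and weights each trial
        [net, tr] = train(net, x, t);
        ttrn      = t(:, tr.trainInd);
        ytrn      = net(x(:, tr.trainInd));
        MSEtrna   = sum(sum((ttrn - ytrn).^2)) / Ndof;
        R2trna(j, i) = 1 - MSEtrna/MSE00;
    end
end
% Smallest candidate whose best trial meets the 0.995 training goal
% (empty if none does; then widen or refine the search)
jbest = find(max(R2trna, [], 2) >= 0.995, 1);
Hbest = Hvec(jbest)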
I have posted zillions of examples in the NEWSGROUP and ANSWERS. Search with one or more of
greg neural h = Hmin:dH:Hmax Ntrials R2trna
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)

