Optimization of dimensions of hidden layer in neural network
Hello all
I want to optimize the number of neurons in the 3 hidden layers of my neural network. Is there any way (apart from applying 3 nested for loops and checking the test performance for each combination) to find the optimal dimensions of all three layers?
My input matrix is 208x200 and my target is 5x200.
Please help me!
0 Comments
Accepted Answer
Greg Heath
31 May 2014
Edited: Greg Heath, 31 May 2014
There is no a priori way to optimize the number of hidden neurons for one hidden layer, much less three. However, you can get a good estimate of the minimum number for a single layer via trial and error. Increasing the number of hidden layers tends to reduce the total number of hidden neurons, so a sensible first step is to design a single-hidden-layer model.
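A minimal sketch of that trial-and-error search, assuming the Neural Network Toolbox with inputs x (208x200) and targets t (5x200) in the workspace; Hmax and Ntrials are illustrative choices, not recommendations:

```matlab
% Loop over candidate hidden-layer sizes and keep the one with the best
% validation performance. Hmax and Ntrials are illustrative assumptions.
Hmax    = 20;                        % largest hidden size to try
Ntrials = 5;                         % random initializations per size
bestPerf = Inf;
for H = 1:Hmax
    for trial = 1:Ntrials
        net = fitnet(H);             % single hidden layer of H neurons
        [net, tr] = train(net, x, t);
        if tr.best_vperf < bestPerf  % compare validation performance
            bestPerf = tr.best_vperf;
            bestH    = H;
        end
    end
end
fprintf('Best H found: %d (validation perf. %.4g)\n', bestH, bestPerf)
```

Repeating each size over several random initializations matters because a single training run can land in a poor local minimum and misrepresent that hidden size.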
A priori information can help, especially in classification when each class is known to consist of a number of known subclasses. Then a divide-and-conquer approach can be followed. I have only used this with elliptical basis functions (most of the time with radial basis functions). A first step in this case could be clustering each class into subclasses. I can't say much more without revealing proprietary info.
Both clustering and principal component decompositions help you understand the data. Look at those before deciding how to construct a divide-and-conquer approach.
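One hedged way to do that exploration in MATLAB (pca and kmeans are in the Statistics Toolbox; the 95% variance threshold and k = 3 subclusters are illustrative assumptions, not part of the original advice):

```matlab
% Explore the data with PCA and clustering before committing to a
% divide-and-conquer design. Assumes x is the 208x200 input matrix.
[coeff, score, latent] = pca(x');   % pca expects observations in rows
explained = 100 * cumsum(latent) / sum(latent);
nPC = find(explained >= 95, 1);     % components covering 95% of variance

k = 3;                              % assumed number of subclusters
idx = kmeans(score(:, 1:nPC), k);   % cluster in the reduced space
accumarray(idx, 1)'                 % subcluster sizes
```

Clustering in the reduced PCA space rather than the raw 208-dimensional space makes the distance computations far better conditioned for so few samples.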
Also take a look at cascade correlation.
1 Comment
Greg Heath
1 June 2014
I just noticed your input dimensions of 208x200. If you use the default data division ratios (0.70/0.15/0.15), Ntrn = 140.
Do you really expect to get reliable performance when you are trying to define a 208-dimensional space with only 140 training vectors?
Reduce the input dimensionality and/or get more data.
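The arithmetic behind Ntrn = 140, assuming the toolbox default dividerand ratios of 0.70/0.15/0.15:

```matlab
% Default data division (dividerand): 70% train, 15% validation, 15% test.
N    = 200;                 % number of sample vectors
Ntrn = round(0.70 * N)      % = 140 training vectors
% 140 training vectors cannot reliably populate a 208-dimensional input
% space, hence the advice: reduce the inputs or gather more data.
```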
Additional Answers (0)