How can I train on big data (30k samples) with the neural network fitting tool? Or how do I set a mini-batch?
Hi
I have an input vector of 518 numbers and an output of 20 numbers, with 30 thousand samples. I find that Bayesian Regularization gives good performance, but training on this huge number of samples is too slow.
Is there any way to solve this problem?
I guess using the Deep Learning Toolbox and setting a mini-batch could help, but I do not know how to do this.
Answers (1)
Prateek Rai
29 Jul 2021
To my understanding, you are training with Bayesian Regularization and want to use mini-batches to speed up training. You can set the mini-batch size using the 'MiniBatchSize' name-value pair argument of the 'trainingOptions' function in the Deep Learning Toolbox. You can also set the maximum number of epochs and the data-shuffling behavior using the 'MaxEpochs' and 'Shuffle' name-value pair arguments.
Please refer to the trainingOptions MathWorks documentation page to find out more about the trainingOptions function. You can also refer to the Deep Learning Using Bayesian Optimization MathWorks documentation page to learn more about applying Bayesian optimization to deep learning.
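If you go the Deep Learning Toolbox route, a minimal sketch of a mini-batch regression setup might look like the following. The variables X, Y, and Xnew, the layer sizes, and the mini-batch size are illustrative assumptions for a 518-input, 20-output fitting problem, not values from the original post:

% Assumed data: X is 30000-by-518 (predictors), Y is 30000-by-20 (responses)
layers = [
    featureInputLayer(518)          % 518 input features per sample
    fullyConnectedLayer(128)        % hidden layer size chosen arbitrarily
    reluLayer
    fullyConnectedLayer(20)         % 20 output values per sample
    regressionLayer];

options = trainingOptions('adam', ...
    'MiniBatchSize', 256, ...       % mini-batch size (the setting discussed above)
    'MaxEpochs', 50, ...            % maximum number of passes over the data
    'Shuffle', 'every-epoch', ...   % reshuffle the training data each epoch
    'Plots', 'training-progress', ...
    'Verbose', false);

net = trainNetwork(X, Y, layers, options);

% Predict on new samples (each row of Xnew is one observation of 518 features)
Ypred = predict(net, Xnew);

With 30k samples, training in mini-batches this way should be much faster per update than fitting the whole data set at once, at the cost of stochastic rather than full-batch gradient steps.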