In a learning curve, training error decreases as the training data size increases.

I've learned, as stated in Dr Andrew Ng's ML course, that training loss/error increases as the training data size increases.
I recently observed what looks like an anomaly: both the training error and the test error curves were decreasing while the training data size was increasing.
Is this normal?
Some posts say this can happen because of regularization. In my case I use trainbr (Bayesian regularization backpropagation).
Could that be the reason?
Thank you.
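
For reference, a minimal sketch (not the original poster's code) of how such a learning curve could be produced with trainbr: train a fixed-size feedforward network on increasingly large subsets of the data and record training and test MSE. The variable names (x, t), the hidden layer size, and the split sizes are illustrative assumptions.

% Minimal learning-curve sketch (illustrative, not the original poster's code).
% Assumes x is an nFeatures-by-N input matrix and t a 1-by-N target vector.
N     = size(x, 2);
idx   = randperm(N);
nTest = round(0.2 * N);                            % hold out 20% as a fixed test set
xTest = x(:, idx(1:nTest));      tTest = t(:, idx(1:nTest));
xPool = x(:, idx(nTest+1:end));  tPool = t(:, idx(nTest+1:end));

sizes    = round(linspace(20, size(xPool, 2), 8)); % increasing training set sizes
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));

for k = 1:numel(sizes)
    xi  = xPool(:, 1:sizes(k));
    ti  = tPool(:, 1:sizes(k));
    net = feedforwardnet(10, 'trainbr');           % Bayesian regularization backprop
    net.divideFcn = 'dividetrain';                 % use the whole subset for training
    net.trainParam.showWindow = false;
    net = train(net, xi, ti);
    trainErr(k) = perform(net, ti, net(xi));       % training MSE
    testErr(k)  = perform(net, tTest, net(xTest)); % test MSE
end

plot(sizes, trainErr, '-o', sizes, testErr, '-s');
legend('Training error', 'Test error');
xlabel('Training set size'); ylabel('MSE');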

Accepted Answer

Shashank Gupta on 5 Feb 2021
Hi,
This all boils down to the number of learnable parameters versus the training data size. Regularization does have an impact on the loss, so yes, it is possible that it is the cause in your case. There are also other possible reasons. For the loss curves you plotted: are those losses actually optimal, i.e. were all hyperparameters tuned properly? Prof. Andrew Ng's argument applies when optimality is reached at each training set size. If you then increase the training data while keeping the same number of learnable parameters, the optimal training loss will be higher; it is a tradeoff. The explanation given in the link you shared also makes sense; there is no denying it.
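To illustrate the fixed-capacity point with a toy sketch (made-up data, not part of the original answer): fitting a model with a fixed number of learnable parameters to increasingly large noisy samples pushes the best achievable training MSE up toward the noise level, while the test MSE goes down.

% Toy illustration: fixed-capacity model (cubic polynomial) vs. growing data.
rng(0);
f      = @(x) sin(2*pi*x);                  % underlying function
xTest  = linspace(0, 1, 200);
yTest  = f(xTest) + 0.1*randn(size(xTest)); % noisy test data
sizes  = [5 10 20 50 100 200 500];
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));

for k = 1:numel(sizes)
    xTr = rand(1, sizes(k));
    yTr = f(xTr) + 0.1*randn(size(xTr));
    p   = polyfit(xTr, yTr, 3);             % fixed number of learnable parameters
    trainErr(k) = mean((polyval(p, xTr) - yTr).^2);
    testErr(k)  = mean((polyval(p, xTest) - yTest).^2);
end

semilogx(sizes, trainErr, '-o', sizes, testErr, '-s');
legend('Training MSE', 'Test MSE');
xlabel('Training set size'); ylabel('MSE');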
I hope this insight helps.
Cheers.

More Answers (0)


