
Unexpected Bayesian Regularization performance

Views: 4 (last 30 days)
Jonathan Lowe on 22 Aug 2020
Answered: Shubham Rawat on 28 Aug 2020
I'm training a network to learn the sin function from a noisy input of 400 samples.
If I use a 1-30-1 feedforwardnet with 'trainlm', the network generalises well. If I use a 1-200-1 feedforwardnet, the network overfits the training data, as expected. My understanding was that 'trainbr' will not overfit even on a network with too many neurons. However, if I run trainbr on a 1-200-1 network until convergence (Mu reaches its maximum), the resulting network seems to overfit the data despite a strong reduction in "Effective # Param".
This is strange behaviour to me. Have I misunderstood Bayesian regularization? Can someone provide an explanation?
I can post my code if necessary, however first I want to know if the following is correct:
'trainbr' will not overfit with large networks if run to convergence
Thanks
2 Comments
Greg Heath on 22 Aug 2020
Edited: Greg Heath on 22 Aug 2020
How many periods are covered by the 400 samples?
What minimum number of samples per period is necessary?
Greg
Jonathan Lowe on 23 Aug 2020
I use 100 samples per period and 4 periods.
x=-1:0.005:1;
y = sin(x*(4*pi))+0.25*randn(size(x));
The trainbr network is also 1-200-1 and converges to about 17 effective parameters. (The blue legend should read sin(4*pi*x).)
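For reference, a minimal sketch of the run described above, assuming the standard Deep Learning Toolbox `feedforwardnet`/`train` workflow; the epoch limit and plotting are illustrative, not part of the original post:

```matlab
% Noisy sine data: ~100 samples per period, 4 periods (as described above)
x = -1:0.005:1;
y = sin(4*pi*x) + 0.25*randn(size(x));

% 1-200-1 network trained with Bayesian regularization
net = feedforwardnet(200, 'trainbr');
net.trainParam.epochs = 1000;   % let mu reach its maximum (convergence)
net = train(net, x, y);

% Compare the fit against the noise-free target
yhat = net(x);
plot(x, y, '.', x, yhat, '-', x, sin(4*pi*x), '--');
legend('noisy data', 'network output', 'sin(4*pi*x)');
```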


Answers (1)

Shubham Rawat on 28 Aug 2020
Hi Jonathan,
Given your dataset and number of neurons, it is quite possible that your model is overfitting.
I have reproduced your code with 20 neurons and the "trainbr" training function, and it gives me the results attached here, with Effective # Param = 18.6.
