What is the default value of the L2 regularization coefficient during training?

While training a deep learning network in MATLAB, which trainingOptions parameter sets the L2 regularization coefficient?
For example, if the Adam optimizer is used, how do I set this parameter?
More concretely: in PyTorch, optim.Adam has a weight_decay option. How do I set the equivalent in MATLAB?

Answers (1)

Adam Danz on 21 Oct 2020

0 votes

I think you're looking for the L2Regularization Solver Option.
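For reference, the documented default for L2Regularization in MATLAB is 1e-4, and you set it through trainingOptions. A minimal sketch for the Adam solver (the value 1e-2, and the commented layer/data names, are placeholders, not recommendations):

```matlab
% Set the L2 regularization factor (weight decay) for the Adam solver.
% The default is 1e-4; here it is set to match a hypothetical
% PyTorch weight_decay of 1e-2.
options = trainingOptions('adam', ...
    'L2Regularization', 1e-2, ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 10);

% net = trainNetwork(XTrain, YTrain, layers, options);  % layers/data assumed
```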

6 Comments

Hi Adam, I'm not sure though. Actually I want to mimic a PyTorch model. All the other parameters are the same; it's this weight_decay I'm not able to figure out in MATLAB.
There in the documentation they said it's the L2 regularizer coefficient. The default value is 1.
Which documentation (PyTorch or MATLAB)?
I assume you're referencing the TORCH.OPTIM.ADAM algorithm, which uses a default value of 0 for weight_decay. The L2Regularization property in MATLAB's TrainingOptionsADAM, which is the factor for the L2 regularizer (weight decay), can also be set to 0. Or are you using a different method of training?
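For what it's worth, both documentation pages appear to describe the coupled scheme from the original Adam paper (this is read off the docs, not verified against either implementation's source): the decay term is added to the loss gradient before the moment estimates are formed. With decay factor λ and parameters θ:

```latex
g_t = \nabla_\theta \mathcal{L}(\theta_{t-1}) + \lambda \, \theta_{t-1}
```

This is the gradient of \(\mathcal{L} + \tfrac{\lambda}{2}\lVert\theta\rVert^2\), i.e. ordinary L2 regularization. The decoupled AdamW variant instead applies the decay outside the adaptive scaling and behaves differently.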
Yes, I am talking about this only. So these two parameters are the same, weight_decay in torch.optim.adam and L2Regularization in MATLAB's Adam training options, right?
Well, I haven't used either of them, so I don't want to claim too much certainty, but their descriptions certainly seem the same.
There are a few ways you could verify this:
  • Look at the code in MATLAB and PyTorch to see how those two parameters are used.
  • Run the same data through both programs with the same inputs and examine the outputs (there are probably some random processes involved so you may not get exactly the same results).
  • If you have some kind of known dataset that produces a known result (sometimes provided in textbooks), you could use that to verify that the L2Regularization term is doing what you think it does.
  • There's likely someone out there more familiar with both programs. Writing to MATLAB Support may help, but they probably wouldn't answer PyTorch questions.
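To illustrate the first bullet, here is a hand-rolled single Adam step with the decay folded into the gradient, the coupled scheme that both weight_decay (torch.optim.Adam) and L2Regularization are documented to use. All numeric values are made up for illustration:

```matlab
% One Adam step with coupled L2 / weight decay (illustrative values only).
w = 0.5;  g_loss = 2*w;           % a parameter and its loss gradient
lambda = 0.01;                    % decay factor (weight_decay / L2Regularization)
lr = 1e-3;  beta1 = 0.9;  beta2 = 0.999;  epsilon = 1e-8;
m = 0;  v = 0;  t = 1;

g = g_loss + lambda*w;            % decay is added to the gradient *before*
                                  % the moment updates, so it gets rescaled
                                  % by the adaptive step size
m = beta1*m + (1-beta1)*g;
v = beta2*v + (1-beta2)*g^2;
mhat = m/(1-beta1^t);  vhat = v/(1-beta2^t);
w = w - lr*mhat/(sqrt(vhat) + epsilon);
```

A decoupled variant (AdamW) would instead subtract lr*lambda*w from w directly, outside the adaptive scaling. If one framework did that and the other didn't, results would diverge, though both docs point to the coupled form here.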
Yes, even I was thinking they are the same.
For your second suggestion, I'm giving the same input to both models but getting entirely different accuracy: 80% in MATLAB, while in PyTorch only 50–55%.
Though PyTorch has no option for padding='same'; maybe that is the issue.
I am confused about what to do now.
Adam Danz on 22 Oct 2020
Edited: Adam Danz on 27 Oct 2020
If it were me, I'd step through the process in debug mode to study and compare the two methods. Perhaps there's more detail in MATLAB's (and PyTorch's) documentation that would shed light on the differences.
Maybe the difference is caused by a small difference in one of the optional parameters, as you mentioned.


