Determining the learning rate and generalization rate in Deep Learning Toolbox

Views: 1 (last 30 days)
Daniel Vainshtein, 28 December 2019
Answered: Mahesh Taparia, 7 January 2020
Hello to all,
I have a given neural network, and I want to see how the learning rate and the generalization rate change as I vary aspects of the net's architecture, such as the activation function, the number of layers, and the training algorithm (gradient descent vs. conjugate gradient descent).
* The only thing I can extract is the accuracy. I don't understand why the learning rate is forced to 0.1 in the example — can it be changed?
* Moreover, I want the activation function to be sigmoid, and the closest available option is tanh. How can I add sigmoid as an activation function?
* In addition, I only see training covered in the documentation. How do I add test samples and check the generalization efficiency?
* Lastly, I see that the only relevant algorithm is SGDM (stochastic gradient descent with momentum), which has the advantage that we can choose our mini-batch size. But I don't see an SCG (scaled conjugate gradient) option — how can I use it as my algorithm when training the net?
To see which options Deep Learning Toolbox offers, I used the documentation:

Answers (1)

Mahesh Taparia, 7 January 2020
Hi Daniel
You can change the learning rate by changing the value of the 'InitialLearnRate' name-value parameter of trainingOptions (for example, 'InitialLearnRate', 0.01) specified here.
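For example, a minimal sketch (assuming you already have training data `XTrain`, `YTrain` and a `layers` array defined elsewhere):

```matlab
% Set the initial learning rate through trainingOptions; the 0.1 in the
% example is just a choice, not a fixed requirement.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.01, ...  % change this to whatever rate you want
    'MaxEpochs', 30, ...
    'Verbose', false);

% Train with the chosen options (XTrain, YTrain, layers are assumed).
net = trainNetwork(XTrain, YTrain, layers, options);
```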
You can define a custom sigmoid layer in MATLAB. You can refer to this link for that.
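A minimal sketch of such a custom layer (the class name `mySigmoidLayer` is hypothetical; depending on your release you may also need to implement a `backward` method, as described in the linked documentation on defining custom deep learning layers):

```matlab
classdef mySigmoidLayer < nnet.layer.Layer
    % mySigmoidLayer  Element-wise logistic sigmoid activation layer.
    % Hypothetical custom layer; drop it into your layer array in place
    % of tanhLayer / reluLayer.
    methods
        function layer = mySigmoidLayer(name)
            % Construct the layer with a given name.
            layer.Name = name;
            layer.Description = 'Element-wise sigmoid activation';
        end
        function Z = predict(~, X)
            % Apply the logistic sigmoid 1/(1+exp(-x)) element-wise.
            Z = 1 ./ (1 + exp(-X));
        end
    end
end
```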
You can evaluate the performance of the model on the test data using the classify command. You can refer to the documentation here.
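A short sketch of this evaluation step (assuming a trained network `net` and held-out test data `XTest` with categorical labels `YTest`):

```matlab
% Predict labels on the test set and compare against the ground truth
% to estimate generalization performance.
YPred = classify(net, XTest);
testAccuracy = mean(YPred == YTest);  % fraction of correctly classified samples
```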
As of now, three of the more frequently used optimization algorithms are implemented: sgdm, adam, and rmsprop.
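The solver is selected as the first argument to trainingOptions, so switching algorithms is a one-line change. Note that scaled conjugate gradient is not among the trainNetwork solvers; it exists as the trainscg training function for shallow networks created with feedforwardnet and trained via train, which may be an alternative if conjugate gradient is essential for your comparison:

```matlab
% Switch the trainNetwork solver by changing the first argument;
% valid choices are 'sgdm', 'adam', and 'rmsprop'.
optsAdam = trainingOptions('adam', 'InitialLearnRate', 1e-3);

% For conjugate gradient, one option is a shallow network with trainscg
% (inputs x and targets t assumed to exist as numeric matrices):
shallowNet = feedforwardnet(10, 'trainscg');  % 10 hidden units, SCG training
shallowNet = train(shallowNet, x, t);
```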
