Unexpected loss reduction using custom training loop in Deep Learning Toolbox
I have created a custom training loop following the documentation example: https://www.mathworks.com/help/releases/R2023a/deeplearning/ug/train-network-using-custom-training-loop.html
However, since I use the same loss function for training and validation, I moved the `forward` call out of the `modelLoss` function so I can reuse the predictions. For example:
[Y,state] = forward(net,X);
[loss,gradients] = dlfeval(@modelLoss,net,Y,T);
function [loss,gradients] = modelLoss(net,Y,T)
% Calculate cross-entropy loss.
loss = crossentropy(Y,T);
% Calculate gradients of loss with respect to learnable parameters.
gradients = dlgradient(loss,net.Learnables);
end
Now the training loss does not decrease as expected. How can I resolve this issue?
Accepted Answer
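The likely cause is that `dlgradient` can only differentiate operations that were traced inside the function evaluated by `dlfeval`. When `forward` is called outside `dlfeval`, the network's forward pass is not recorded on the automatic-differentiation trace, so the returned gradients do not reflect the loss and the parameters never update correctly. The usual fix is to keep `forward` inside the model loss function and reuse only the loss computation for validation. A minimal sketch (variable names such as `XVal` and `TVal` are illustrative, not from the original post):

```matlab
function [loss,gradients,state] = modelLoss(net,X,T)
    % Forward pass must run inside the dlfeval trace so that
    % dlgradient can differentiate through the network.
    [Y,state] = forward(net,X);

    % Calculate cross-entropy loss.
    loss = crossentropy(Y,T);

    % Calculate gradients of loss with respect to learnable parameters.
    gradients = dlgradient(loss,net.Learnables);
end
```

Training then calls the function through `dlfeval`, while validation can reuse the same loss formula with `predict` (inference mode, no gradients needed):

```matlab
% Training step: trace the forward pass and get gradients.
[loss,gradients,state] = dlfeval(@modelLoss,net,X,T);

% Validation: no gradient trace required.
YVal = predict(net,XVal);
lossVal = crossentropy(YVal,TVal);
```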
Additional Answers (0)