How to avoid Inf values when writing deep learning code?

Views: 4 (last 30 days)
ferda sonmez on 29 Mar 2019
Hi,
I wrote deep learning code that includes the Softmax function below. During training I start to get Inf values (and consequently NaN values) in some matrix multiplication operations or as the result of the softmax operation.
I also tried other softmax implementations that I found on the internet and in books, with no improvement.
Getting these NaN values even in the first training epoch and on the very first samples (e.g., the 5th sample) corrupts the training of the model.
To simplify my question I didn't add information about the number of nodes in the input, output, and hidden layers, because I think this problem occurs independently of those numbers. If requested I can provide more information.
Best Regards,
Ferda Özdemir Sönmez
function y = Softmax(x)
% Subtract the maximum before exponentiating: softmax is invariant to a
% constant shift, and keeping every exponent <= 0 prevents exp() from
% overflowing to Inf (which then produces NaN in Inf/Inf divisions).
ex = exp(x - max(x));
% Use elementwise division (./), not matrix right division (/).
y = ex ./ sum(ex);
end
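The overflow can be reproduced outside MATLAB as well. The sketch below (in Python/NumPy, purely for illustration; the function names `softmax_naive` and `softmax_stable` are my own) shows how `exp` of large inputs overflows to Inf and yields NaN, while the max-subtraction trick stays finite:

```python
import numpy as np

def softmax_naive(x):
    # Direct translation of the original code: overflows for large x.
    with np.errstate(over="ignore", invalid="ignore"):
        ex = np.exp(x)
        return ex / ex.sum()

def softmax_stable(x):
    # softmax(x) == softmax(x - c) for any constant c; subtracting
    # max(x) keeps every exponent <= 0, so exp never exceeds 1.
    ex = np.exp(x - np.max(x))
    return ex / ex.sum()

x = np.array([1000.0, 1001.0, 1002.0])
print(softmax_naive(x))   # contains NaN: exp(1000) overflows to inf
print(softmax_stable(x))  # finite probabilities that sum to 1
```

The two functions are mathematically identical; only the floating-point behavior differs, which is why the stable form is the standard implementation.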

Answers (0)

Release: R2018b
