Why do NaNs appear in the mini-batch loss and mini-batch RMSE when training a convolutional neural network for regression?
14 views (last 30 days)
I used the same code steps as in the following link, but modified them for my own data:
https://www.mathworks.com/help/nnet/examples/train-a-convolutional-neural-network-for-regression.html
traindata=rtrain_csiq;
Y = rscore;
testdata=utest_csiq;
layers = [ ...
imageInputLayer([256 256 1])
convolution2dLayer(12,25)
reluLayer
fullyConnectedLayer(1)
regressionLayer];
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.001, ...
    'MaxEpochs',15);
net = trainNetwork(traindata,Y,layers,options)
predictedTest = predict(net,testdata);
but I get the following output:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/166105/image.png)
Please, how can I solve this? Thanks.
Comments: 0
Answers (1)
Amy
31 Aug 2017
Hi Ismail,
Sometimes this can happen if your data includes many regressors and/or large regression response values. This leads to larger losses that can become NaNs.
Two possible solutions:
- Try a lower initial learning rate.
- Normalize the responses (the variable Y in your example) so that the maximum value is 1. You can use the normc function to do this.
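The suggested fixes (a lower initial learning rate and normalized responses) can be sketched in MATLAB as follows; this assumes the variables rtrain_csiq and rscore from the question, and the specific learning-rate value is an illustrative assumption, not a prescribed one:

```matlab
% Scale responses so their maximum absolute value is 1, which keeps
% regression losses small and less likely to overflow to NaN.
Y = rscore / max(abs(rscore(:)));
% (normc would work similarly, normalizing the columns of a matrix
% to unit norm.)

layers = [ ...
    imageInputLayer([256 256 1])
    convolution2dLayer(12,25)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];

% Lower the initial learning rate (0.0001 here is an assumed value;
% tune it for your data).
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.0001, ...
    'MaxEpochs',15);

net = trainNetwork(rtrain_csiq, Y, layers, options);
```

If the loss still turns to NaN, lowering the learning rate further by another factor of 10 is a common next step.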
Comments: 2
AlexanderTUE
4 Sep 2017
Hi Amy, hi Ismail,
I had a similar problem in the past. It seems that a single convolutional layer is not enough for such large image sizes. I used three convolutional layers with explicitly initialized weights. Please see the following Q&A: https://de.mathworks.com/matlabcentral/answers/337587-how-to-avoid-nan-in-the-mini-batch-loss-from-traning-convolutional-neural-network
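A sketch of that idea, three smaller convolutional layers with small random initial weights instead of one large layer; the filter sizes and counts below are illustrative assumptions, not values taken from the linked post:

```matlab
% Create three conv layers and set their Weights property explicitly
% to small random values, which helps avoid exploding activations.
conv1 = convolution2dLayer(5,16,'Padding',2);
conv1.Weights = 0.01*randn(5,5,1,16);    % 1 input channel (grayscale)
conv2 = convolution2dLayer(5,32,'Padding',2);
conv2.Weights = 0.01*randn(5,5,16,32);
conv3 = convolution2dLayer(5,64,'Padding',2);
conv3.Weights = 0.01*randn(5,5,32,64);

layers = [ ...
    imageInputLayer([256 256 1])
    conv1
    reluLayer
    maxPooling2dLayer(2,'Stride',2)     % downsample between conv stages
    conv2
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    conv3
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];
```

The pooling layers shrink the 256x256 input between stages, so the final fully connected layer sees a much smaller feature map than in the original single-layer network.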
Alex