Prediction during training differs from the final result

3 views (last 30 days)
Christian Huggler on 16 May 2022
Commented: Abhijit Bhattacharjee on 19 May 2022
I have twelve weighted classes that I train with large, augmented training and validation pixelLabelImageDatastores.
Created with:
lgraph=deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5%–99.7%), and the loss to about 0.05 (for both training and validation).
When I test the generated DAGNetwork with "jaccard", only the first ten classes have a high IoU, and the last two are zero! I also tested different normalizations such as zscore, always with the same result. When I use the "predict" or "semanticseg" functions to check individual images, classes 11 and 12 do indeed seem to be poorly learned.
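For reference, the per-class IoU check can also be done with evaluateSemanticSegmentation; a minimal sketch, assuming imdsTest and pxdsTest hold the test images and ground-truth labels:
pxdsResults = semanticseg(imdsTest, net, 'WriteLocation', tempdir);  % run the trained network on the test images
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTest);       % compare predictions against ground truth
metrics.ClassMetrics                                                  % per-class Accuracy, IoU, MeanBFScore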
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and examine, for example, class 11 with "imshow(Y(:,:,11))", everything appears to be learned fine!
What happens in "trainNetwork()" when the training is finished? Under what circumstances do forwardLoss() scores differ?
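One way to reproduce the forwardLoss breakpoint check outside of training is to inspect the softmax scores of the trained network directly; a minimal sketch, with testImage as a placeholder (it must be normalized the same way as during training, since the input layer uses 'Normalization','none'):
scores = predict(net, testImage);     % H-by-W-by-12 softmax scores from the trained DAGNetwork
imshow(scores(:,:,11))                % inspect class 11, analogous to imshow(Y(:,:,11)) in forwardLoss
[~, idx] = max(scores, [], 3);        % per-pixel argmax over the 12 classes
imshow(label2rgb(idx))                % quick visual check of the predicted label map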
4 Comments
Christian Huggler on 19 May 2022
Does that mean that "trainNetwork()" is useless and that a separate training procedure has to be made?
Abhijit Bhattacharjee on 19 May 2022
There might be more specifics in your code that need to be addressed one-on-one. I'd suggest submitting a technical support request.


Answers (0)

Release: R2022a
