Prediction during training differs from the final result
I have twelve weighted classes that I train with large, augmented training and validation pixelLabelImageDatastore objects.
The network was created with:
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
% Class-weighted cross-entropy for the twelve classes
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
% Input layer without normalization
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5%–99.7%) and the loss to about 0.05 (for both training and validation).
When I test the generated DAGNetwork with "jaccard", only the first ten classes have a high IoU; the last two are zero! I also tested different normalizations such as zscore, always with the same result. When I use the "predict" or "semanticseg" functions to check individual images, classes 11 and 12 do indeed seem to be poorly learned.
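The per-image check I do looks roughly like this sketch ("I" and "pxLabel" are assumed to be a test image and its categorical ground-truth label image):

C = semanticseg(I, net);             % predicted categorical label image
iouPerClass = jaccard(C, pxLabel);   % 12-by-1 vector, one IoU per class
% Over the whole test set, evaluateSemanticSegmentation reports the same thing:
% metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTruth);
% metrics.ClassMetrics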
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and examine e.g. class 11 with "imshow(Y(:,:,11))", everything looks well learned!
What happens in "trainNetwork()" when training finishes? Under what circumstances would the forwardLoss() scores differ from the trained network's predictions?
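To make the comparison concrete, a minimal sketch of how I check the inference-time scores against the Y(:,:,11) activation seen at the forwardLoss breakpoint (assuming "I" is a test image preprocessed exactly as during training):

[C, ~, allScores] = semanticseg(I, net);   % allScores is H-by-W-by-12 softmax scores
imshow(allScores(:,:,11), []);             % score map for class 11 at inference time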
4 Comments
Abhijit Bhattacharjee on 19 May 2022
There might be more specifics in your code that need to be addressed 1-1. I'd suggest submitting a technical support request.
Answers (0)