Partially labelled semantic segmentation
Hi, I'm trying to use partially labelled images as ground truth for semantic segmentation training. There are lots of ambiguous regions in my images, so I'm hoping to train only on the apparent regions, by assigning a class weight of 0 (technically 10^-20, since zero cannot be entered) to the unlabeled class in the pixel classification layer.
I found that the mini-batch accuracy fluctuates around 35-40% and never converges.
Is there any way I could fix this problem?
Is it related to the excessive unlabeled regions?
I would appreciate any advice concerning this problem.
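As a rough sketch, the weighting described above would look something like this (the class names and weight values are illustrative, not my actual code):

```matlab
% Near-zero weight for the unlabeled class so it barely
% contributes to the training loss (hypothetical class names).
classes = ["drysand" "wetsand" "unlabeled"];
weights = [1 1 1e-20];          % cannot be exactly zero, so use 1e-20
pxLayer = pixelClassificationLayer( ...
    'Name','pixelLabels', ...
    'Classes',classes, ...
    'ClassWeights',weights);
```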
Accepted Answer
Raunak Gupta
3 Oct 2020
Hi,
Assigning zero class weight to the unlabeled pixels will remove them from the loss calculation during training. However, when the accuracy metric is computed, it includes all pixels, so the network's effectively random predictions for the unlabeled pixels (I am assuming there is more than one class among the labeled pixels) drag the accuracy down.
Instead, I would recommend assigning a background category to the unlabeled pixels and keeping its class weight the same as the labeled classes, so that the network can assign a proper category to every pixel (the background pixels are then trained as well). This way the network will be more robust across all parts of the image. So, adding one more class named background to your previous classes may help you converge to a higher accuracy.
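As a sketch of this suggestion (class names are illustrative), the change amounts to replacing the near-zero weight with an explicit, equally weighted background class:

```matlab
% Treat unlabeled pixels as a trainable "background" class
% with the same weight as the other classes (hypothetical names).
classes = ["drysand" "wetsand" "background"];
weights = [1 1 1];               % equal class weights
pxLayer = pixelClassificationLayer( ...
    'Name','pixelLabels', ...
    'Classes',classes, ...
    'ClassWeights',weights);
```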
7 Comments
byungchan
3 Oct 2020
Thank you for your answer, Gupta.
What I actually want to do is classify the pixel labels in the ambiguous (unlabelled) regions using a network trained only on the apparent (labeled) regions.
However, if I assign a "background" label to the unlabelled regions and train on those images, the trained network would segment those ambiguous areas as "background" rather than as one of the specific categories I wish to verify (my goal is to see whether those ambiguous regions get classified as one of my predefined categories).
So my questions are these:
- When calculating the mini-batch accuracy during training, does it account for all regions in the image? (i.e., if the network assigns one of my categories ("wet sand", "dry sand") to an unlabeled region (labeled NaN), does that affect the mini-batch accuracy?)
- If so, is there any way I could see the mini-batch accuracy calculated only over the labeled regions?
Thank you very much in advance!
Raunak Gupta
3 Oct 2020
Edited: Raunak Gupta, 3 Oct 2020
Hi,
I understand that you are trying to assign labels to unlabeled data in the above formulation. However, it is not possible to tell whether those labels are correct, because there is no ground truth available for those pixels.
So the mini-batch accuracy is calculated over all pixels, and any pixel whose category is NaN is treated as a misclassification. I think that is the reason for the very low mini-batch accuracy.
For the second question: when specifying the validation data, you can pass only the pixels that are labeled and ignore the rest of the ambiguous pixels. Since you give X (input data) and Y (response variable) to ValidationData, the unlabeled pixels can be removed using array indexing.
This will not change the mini-batch accuracy, but you will get the desired validation accuracy, since only labeled data is passed as validation data.
Hope this clarifies!
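For reference, masking the unlabeled pixels out of an accuracy calculation by array indexing can be sketched like this (variable names are illustrative; `truth` is assumed to be a categorical label image with `<undefined>` entries for the unlabeled pixels):

```matlab
% Accuracy over labeled pixels only.
pred  = semanticseg(I, net);      % categorical prediction from the network
mask  = ~isundefined(truth);      % true wherever a ground-truth label exists
acc   = mean(pred(mask) == truth(mask));   % fraction of correct labeled pixels
```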
byungchan
3 Oct 2020
Thank you very much, this has helped a lot!
byungchan
9 Oct 2020
Hello Gupta, I have been working on this since your tips, and I have a few more questions which I hope you don't mind.
1. I don't know why the training loss stays so high (it always converges above 0.5 after only about 5 epochs, even though I tried numerous runs with different initial rates and learning rates) when training with partially labeled images.
I can understand that the training accuracy can be low, since it accounts for all pixels in the image, but I still don't understand the high loss.
Do you have any clues? Or do you think it is acceptable as long as it shows convergence?
Also, I don't know how MATLAB calculates its cross-entropy loss for images with unlabeled regions.
Is the loss contribution from the unlabeled regions computed as zero in MATLAB?
2. Following question 1 and the advice you gave previously, I set the unlabeled regions in the original image to NaN (it automatically becomes 0, so the unlabeled regions turn black) and passed it through the trained network via 'ValidationData' (although it is called validation, I used the same images as for training, in order to obtain the "true" training accuracy).
But the accuracy is even lower than 30%, which I think still accounts for the unlabeled regions. (Something is clearly off, because if I run evaluateSemanticSegmentation manually, the per-class accuracy is mostly over 80-90%.) The loss, on the other hand, is around zero, which is quite surprising; I expected it to be around 0.5 (hence question 1 above).
So is there any way I could fix this? What I really want is to monitor the "true" training accuracy and loss (computed only over the labeled regions) in real time during training, so that I can validate the training with some confidence.
Thank you very much in advance, and have a wonderful day!
Hi,
For both queries, I can conclude that the NaN (simply unlabeled) pixels are used when calculating the metric in real time during training. I am not aware of any workflow, as of now, to exclude the unlabeled data from the metric calculation.
So I would recommend checking the loss and accuracy with the custom functions you mentioned after a training run finishes. That way you can be sure of the model performance at the end.
For the first query, about convergence: you can stop the training if the loss has not changed for 5-10 epochs, since that is a good sign that you have arrived at a local minimum of the loss.
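A post-training check could be sketched like this, using evaluateSemanticSegmentation as mentioned above (datastore names are illustrative; my understanding is that pixels left undefined in the ground truth are excluded from these metrics, which would explain the much higher per-class accuracy):

```matlab
% Run the trained network over a test image datastore and compare
% against the ground-truth pixel label datastore after training.
pxdsResults = semanticseg(imdsTest, net, 'WriteLocation', tempdir);
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTruth);
metrics.ClassMetrics        % per-class accuracy and IoU
```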
byungchan
9 Oct 2020
Thank you!
byungchan
10 Oct 2020
After I installed R2020b, the accuracy problem was resolved!
Thanks again!
More Answers (0)