Poor results for neural-network based image segmentation
I am trying to segment images of animal embryos, like this one:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514809/image.png)
I would like to extract the entire oval-shaped embryo without getting any of the background or the appendages that stick off of the embryo, like in this hand-drawn mask:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514814/image.png)
I have around 350 training images of embryos that have been hand-segmented like this one, and I have trained a small convolutional neural network to try to segment these images automatically. The network has this structure:
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',5, ...
    'MiniBatchSize',4);

filterSize = 3;
numFilters = 64;
numClasses = 2;

layers = [
    imageInputLayer([500 1000 1])                                     % grayscale 500-by-1000 input
    convolution2dLayer(filterSize,numFilters,'Padding','same')        % encoder
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)                                   % downsample by 2
    convolution2dLayer(filterSize,numFilters,'Padding','same')
    reluLayer()
    transposedConv2dLayer(4,numFilters,'Stride',2,'Cropping','same')  % decoder: upsample back to input size
    convolution2dLayer(1,numClasses)                                  % per-pixel class scores
    softmaxLayer()
    pixelClassificationLayer()
    ];
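For reference, the training call itself follows the standard datastore workflow; a minimal sketch (the folder names and label IDs below are placeholders, not my actual setup):

% Minimal training sketch; the folders and label IDs are placeholders.
imageDir = "embryoImages";      % raw images
labelDir = "embryoMasks";       % hand-drawn masks
classNames = ["embryo","background"];
labelIDs   = [255 0];           % pixel values used in the mask images (assumed)
imds = imageDatastore(imageDir);
pxds = pixelLabelDatastore(labelDir,classNames,labelIDs);
net  = trainNetwork(combine(imds,pxds),layers,opts);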
Training the network with the settings above leads to an accuracy of around 94%, but when I actually look at its performance on the training images, it is not doing a good job of removing the appendages:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514819/image.png)
This problem persists for most of the images in the training set, and I haven't even tested it on validation data because it's performing so poorly on the training set. I can't manually chop off the appendages via image erosion, because the angle and length of the appendages change from image to image, so I would need to set the erosion parameters manually for each image, and I have hundreds of thousands of images.
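For what it's worth, the 94% is presumably the overall pixel accuracy, which the background and the embryo body dominate, so the misclassified appendage pixels barely move it. A rough sketch of checking per-class metrics instead, assuming a trained network net, an image datastore imds, and a ground-truth pixel label datastore pxdsTruth built from the hand-drawn masks:

% Rough evaluation sketch; net, imds, and pxdsTruth are assumed to exist.
pxdsPred = semanticseg(imds,net,'MiniBatchSize',4,'WriteLocation',tempdir);
metrics  = evaluateSemanticSegmentation(pxdsPred,pxdsTruth);
metrics.ClassMetrics    % per-class accuracy and IoU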
What can I do to improve performance of the pixel classification network?
Thank you!
0 Comments
Accepted Answer
Matt J
18 Oct 2023
> I can't manually chop off the appendages via image erosion

bwlalphashape, from this FEX download, may help:
% Threshold, fill holes, then remove the thin appendages by alpha-shape
% closing the background (bwlalphaclose is from the FEX download above).
A = imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514819/image.png');
A = im2gray(A);
B = imfill(A>135,'holes');            % rough embryo mask from a global threshold
mask = ~bwlalphaclose(~B,45);         % close the background with alpha radius 45, i.e. open the mask
imshow(imfuse(A,mask,'falsecolor'))   % overlay for visual inspection
5 Comments
Matt J
19 Oct 2023
Edited: Matt J on 19 Oct 2023
Here's a smoother version:
Images = ["https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514934/image.png",
          "https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514939/image.png",
          "https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514944/image.png",
          "https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514949/image.png"];

for i = 1:numel(Images)
    figure
    getMask(Images{i});
end

function mask = getMask(Image)
    A = im2gray(imread(Image));
    [m,n] = size(A);
    B = bwareafilt(imfill(A<220,'holes'),1);   % threshold, fill holes, keep the largest blob
    C = bwareafilt(~bwlalphaclose(~B,40),1);   % strip the thin appendages, keep the largest blob
    b = bwboundaries(C);  b = fliplr(b{1});    % trace the boundary and convert to [x y]
    b = sgolayfilt(b,3,231,[],1);              % Savitzky-Golay smoothing along the boundary
    mask = poly2mask(b(:,1),b(:,2),m,n);       % rebuild the mask from the smoothed polygon
    mask = imerode(mask,strel('disk',21));     % round off small-scale wiggles ...
    mask = imdilate(mask,strel('disk',22));    % ... then restore (slightly expand) the size
    imshow(labeloverlay(A,mask,'Transparency',0.85,'Color','spring'));
end
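Since getMask returns the mask, the same routine could be run over a whole folder (the imshow call inside getMask would probably be dropped for batch use); a sketch, with placeholder folder names:

% Batch sketch: apply getMask to every PNG in a folder and save the masks.
% inputDir and outputDir are placeholder names, not from the original post.
inputDir  = "embryoImages";
outputDir = "embryoMasks";
files = dir(fullfile(inputDir,"*.png"));
for k = 1:numel(files)
    f = fullfile(inputDir,files(k).name);
    mask = getMask(f);
    imwrite(mask,fullfile(outputDir,files(k).name));
end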
More Answers (1)
Matt J
18 Oct 2023
Maybe increase the fitting capacity of the network. Add another encoder/decoder layer and/or increase the filter size?
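For illustration, a deeper variant of the layer stack from the question might look like this; a sketch only, with the extra encoder/decoder stage and filter counts chosen arbitrarily rather than tuned:

% Sketch of a deeper encoder/decoder; depths and filter counts are illustrative.
layers = [
    imageInputLayer([500 1000 1])
    convolution2dLayer(3,64,'Padding','same')
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)                              % 1/2 resolution
    convolution2dLayer(3,128,'Padding','same')
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)                              % 1/4 resolution
    convolution2dLayer(3,128,'Padding','same')
    reluLayer()
    transposedConv2dLayer(4,128,'Stride',2,'Cropping','same')    % back to 1/2
    reluLayer()
    transposedConv2dLayer(4,64,'Stride',2,'Cropping','same')     % back to full resolution
    reluLayer()
    convolution2dLayer(1,2)                                      % per-pixel class scores
    softmaxLayer()
    pixelClassificationLayer()
    ];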
0 Comments