Blocking pixel label data for semantic segmentation DL training

Views: 5 (last 30 days)
Software Developer on 26 June 2024
Answered: Ashish Uthama on 3 July 2024
I'm trying to block images and their pixel labels for training a U-Net. I can use a blockedImageDatastore for the input images, but I don't know how to get this blocking behavior from the pixelLabelDatastore that holds the expected labels. I could get the behavior myself by splitting all the images beforehand and saving the tiles to disk, but I'd rather not have to deal with the file cleanup or lose the ability to change the blocking dynamically. Does anyone know a way to achieve this?
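For reference, here is a minimal sketch of the setup I have in mind (the variable names, class names, and label IDs are just placeholders):

inputTileSize = [256 256];
classNames = ["background","foreground"];
labelIDs = [0 1];

% The input image can be blocked and wrapped in a blockedImageDatastore:
bim = blockedImage(trainImage,BlockSize=inputTileSize); % trainImage: H-by-W-by-C array
bimds = blockedImageDatastore(bim);

% ...but pixelLabelDatastore reads whole label images and has no
% equivalent blocking option:
pxds = pixelLabelDatastore("trainLabels.png",classNames,labelIDs);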

Answers (2)

Malay Agarwal on 27 June 2024
Edited: 27 June 2024
Please refer to the following link for an example of how to train a U-Net on multispectral images: https://www.mathworks.com/help/images/multispectral-semantic-segmentation-using-deep-learning.html
The example suggests using "blockedImage" to preprocess both your training samples and the labels. Specifically, you can refer to the following section of the example for sample code: https://www.mathworks.com/help/images/multispectral-semantic-segmentation-using-deep-learning.html#SemanticSegmentationOfMultispectralImagesExample-7.
In the code:
inputTileSize = [256 256];
bim = blockedImage(train_data(:,:,1:6),BlockSize=inputTileSize);
bLabels = blockedImage(labelsTrain,BlockSize=inputTileSize);
bmask = blockedImage(maskTrain,BlockSize=inputTileSize);
  • "bim" represents the first 6 channels of the training image, blocked using a block size of "[256 256]".
  • "bLabels" are the corresponding labels, blocked using the same block size.
  • "bmask" is the binary mask which represents the valid segmentation region, made using the 7th channel of the image and blocked using the same block size.
The example then selects the block locations that sufficiently overlap the mask (at least 95%, per the "InclusionThreshold" value) using the following code:
overlapPct = 0.185;
blockOffsets = round(inputTileSize.*overlapPct);
bls = selectBlockLocations(bLabels, ...
BlockSize=inputTileSize,BlockOffsets=blockOffsets, ...
Masks=bmask,InclusionThreshold=0.95);
After one-hot encoding the labels, the example creates two "blockedImageDatastore" objects, one for the image and one for the labels. It uses the "BlockLocationSet" name-value argument to keep only those image and label blocks that overlap the mask:
bimds = blockedImageDatastore(bim,BlockLocationSet=bls,PadMethod=0);
bimdsLabels = blockedImageDatastore(bLabels,BlockLocationSet=bls,PadMethod=0);
Finally, it combines the blocked image and label datastores into a single datastore using the "combine" function. This combined datastore can be used to train the U-Net.
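For context, here is a short sketch of that final step; the call to "combine" follows the example, while the network ("lgraph") and training "options" in the commented line are placeholders:

dsTrain = combine(bimds,bimdsLabels);

% The combined datastore yields paired image/label blocks, so it can be
% passed to training like any other paired datastore, for example:
% net = trainNetwork(dsTrain,lgraph,options);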
Hope this helps!

Ashish Uthama on 3 July 2024
I have not tried this, but instead of a pixelLabelDatastore, could you try using another blockedImageDatastore to read the label data, and then use a transform() call to convert the pixel data into label categories?
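An untested sketch of that idea (the class names, label IDs, block size, and input variables are made up):

inputTileSize = [256 256];
classNames = ["background","foreground"];
labelIDs = [0 1];

bim = blockedImage(trainImage,BlockSize=inputTileSize);
bLabels = blockedImage(labelImage,BlockSize=inputTileSize);

% Use the same block locations for both so image/label pairs stay aligned.
bls = selectBlockLocations(bim,BlockSize=inputTileSize);
bimds = blockedImageDatastore(bim,BlockLocationSet=bls);
blabelds = blockedImageDatastore(bLabels,BlockLocationSet=bls);

% Convert the numeric label blocks to categorical labels as they are read.
blabeldsCat = transform(blabelds,@(data) toCategoricalBlocks(data,labelIDs,classNames));

dsTrain = combine(bimds,blabeldsCat);

function out = toCategoricalBlocks(data,labelIDs,classNames)
% Handle blocks whether they arrive as a plain array or wrapped in a cell array.
if iscell(data)
    out = cellfun(@(b) categorical(b,labelIDs,classNames),data,UniformOutput=false);
else
    out = categorical(data,labelIDs,classNames);
end
end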

Release

R2024a
