segmentObjects
Syntax
masks = segmentObjects(detector,I)
[___] = segmentObjects(___,Name=Value)
Description
masks = segmentObjects(detector,I) segments objects within a single image or array of images I using SOLOv2 instance segmentation, and returns the predicted object masks for the input image or images.
Note
This functionality requires Deep Learning Toolbox™ and the Computer Vision Toolbox™ Model for SOLOv2 Instance Segmentation. You can install the Computer Vision Toolbox Model for SOLOv2 Instance Segmentation from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.
[___] = segmentObjects(___,Name=Value) specifies options using one or more name-value arguments, in addition to any combination of arguments from previous syntaxes. For example, Threshold=0.9 specifies the confidence threshold as 0.9.
Examples
Segment Instances of Objects
Create a pretrained SOLOv2 instance segmentation network.
model = solov2("light-resnet18-coco");
Read a test image that includes objects that the network can detect, such as dogs, into the workspace.
I = imread("kobi.png");
Segment instances of objects in the image using the SOLOv2 instance segmentation model.
[masks,labels,scores] = segmentObjects(model,I);
Display the instance segmentation results. Overlay the detected object instance mask on the test image.
overlayedImage = insertObjectMask(I,masks);
imshow(overlayedImage)
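The segmentation also returns a class label and a confidence score for each mask. The following is a minimal sketch, using the labels and scores variables returned above, that lists each detected instance:
% List each predicted instance with its class label and confidence score.
for k = 1:numel(labels)
    fprintf("%s: %.2f\n",string(labels(k)),scores(k));
end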
Segment Instances of Objects in Image Datastore with SOLOv2
Load a pretrained SOLOv2 instance segmentation network.
model = solov2("resnet50-coco");
Create a datastore of test images.
imageFiles = fullfile(toolboxdir("vision"),"visiondata","visionteam*.jpg");
dsTest = imageDatastore(imageFiles);
Segment instances of objects using the SOLOv2 instance segmentation model.
dsResults = segmentObjects(model,dsTest,Threshold=0.55);
Running SoloV2 network
--------------------------
* Processed 2 images.
For each test image, display the instance segmentation results. Overlay the detected object masks on the test image.
while(hasdata(dsResults))
    testImage = read(dsTest);
    results = read(dsResults);
    maskColors = lines(numel(results{2}));
    figure
    overlayedImage = insertObjectMask(testImage,results{1},Color=maskColors);
    imshow(overlayedImage)
end
Input Arguments
detector — SOLOv2 instance segmentation model
solov2 object
SOLOv2 instance segmentation model, specified as a solov2 object.
I — Image or batch of images
numeric matrix | numeric array
Image or batch of images on which to perform instance segmentation, specified as one of these values.
Image Type | Data Format |
---|---|
Single grayscale image | 2-D matrix of size H-by-W |
Single color image | 3-D array of size H-by-W-by-3. |
Batch of B grayscale or color images | 4-D array of size H-by-W-by-C-by-B. The number of color channels C is 1 for grayscale images and 3 for color images. |
The height H and width W of each image must be greater than or equal to the input height h and width w of the network.
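For example, the following is a short sketch, assuming the model object from the first example and two same-sized color images (here the bundled kobi.png image duplicated purely for illustration), that forms a 4-D batch and segments it:
% Form an H-by-W-by-3-by-2 batch of color images and segment all of them.
I1 = imread("kobi.png");
I2 = I1;                 % any second color image of the same size works
batch = cat(4,I1,I2);
[masks,labels,scores] = segmentObjects(model,batch);
For a batch input, each output is a B-by-1 cell array with one cell per image.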
imds — Datastore of images
datastore
Datastore of images, specified as a datastore such as an ImageDatastore or CombinedDatastore object. If calling the datastore with the read function returns a cell array, then the image data must be in the first cell.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: segmentObjects(detector,I,Threshold=0.9) specifies the confidence threshold as 0.9.
Threshold — Confidence threshold
0.5 (default) | numeric scalar in range [0, 1]
Confidence threshold, specified as a numeric scalar in the range [0, 1]. The segmentObjects function filters out predictions with confidence scores less than the threshold value. Increase this value to reduce the number of false positives, at the possible expense of missing some true positives.
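As a rough illustration, here is a sketch, assuming the model and I variables from the first example, showing that a higher threshold typically returns fewer, higher-confidence masks:
% Compare the number of instances returned at two confidence thresholds.
masksLoose = segmentObjects(model,I,Threshold=0.3);
masksStrict = segmentObjects(model,I,Threshold=0.8);
fprintf("Loose: %d masks, strict: %d masks\n",size(masksLoose,3),size(masksStrict,3));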
MaskThreshold — Mask probability threshold
0.5 (default) | numeric scalar in range [0, 1]
Mask probability threshold, specified as a numeric scalar in the range [0, 1]. The mask probability threshold is the threshold value for the mask probabilities, determined by an output activation function, that separates object mask pixels from background pixels. If the threshold is too high, the function might incorrectly classify some foreground object pixels as background pixels, reducing the accuracy of the segmentation.
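For instance, the following sketch, again assuming model and I from the first example, tightens the mask boundaries by raising the mask probability threshold:
% A higher MaskThreshold keeps only pixels with high mask probability,
% which can shrink the returned masks.
masksTight = segmentObjects(model,I,MaskThreshold=0.7);
overlayTight = insertObjectMask(I,masksTight);
imshow(overlayTight)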
SelectStrongest — Select strongest mask prediction
true or 1 (default) | false or 0
Select the strongest mask prediction for each segmented object instance using non-maximum suppression, specified as a numeric or logical 1 (true) or 0 (false).
true — Return the strongest object mask prediction per object. The segmentObjects function selects these predictions by using non-maximum suppression to eliminate overlapping bounding boxes based on their confidence scores.
false — Return all predictions. You can then create a custom operation to eliminate overlapping object masks.
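The following is a short sketch, assuming model and I from the first example, of retrieving all raw predictions and then applying a simple custom score filter (the 0.6 cutoff is hypothetical):
% Return every candidate mask, then keep only the higher-scoring ones.
[allMasks,allLabels,allScores] = segmentObjects(model,I,SelectStrongest=false);
keep = allScores > 0.6;          % hypothetical score cutoff
masks = allMasks(:,:,keep);
labels = allLabels(keep);
scores = allScores(keep);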
Acceleration — Network acceleration type
"auto" (default) | "mex" | "none"
Network acceleration type to use for performance optimization, specified as one of these options:
"auto" — Automatically select optimizations suitable for the input network and environment.
"mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
"none" — Disable all acceleration.
Use network acceleration to improve performance when using the same instance segmentation network and segmentation parameters across multiple image inputs, at the expense of additional overhead on the initial function call, and a possible increase in memory usage.
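The following is a minimal sketch, assuming model, I, and a supported GPU are available, of enabling MEX acceleration for repeated calls with the same network and parameters:
% MEX acceleration pays off across many calls; the first call incurs
% compilation overhead.
masks = segmentObjects(model,I,Acceleration="mex");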
ExecutionEnvironment — Hardware resource
"auto" (default) | "gpu" | "cpu"
Hardware resource on which to process images with the network, specified as one of the execution environment options in this table.
ExecutionEnvironment | Description |
---|---|
"auto" | Use a GPU if available. Otherwise, use the CPU. The use of a GPU requires Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox). |
"gpu" | Use the GPU. If a suitable GPU is not available, the function returns an error message. Using a GPU requires Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox). |
"cpu" | Use the CPU. |
MaxNumKernels — Maximum number of convolution kernels
"auto" (default) | positive integer in range [1, 3096]
Maximum number of convolution kernels, specified as a positive integer in the range [1, 3096]. This value sets the maximum number of convolution kernels, or filters, that the SOLOv2 network uses to perform a convolution operation for producing segmentation masks.
Specify the value of MaxNumKernels only when performing code generation. Otherwise, use the default value. The default value, "auto", sets the maximum number of kernels depending on the content of the image, based on the number of kernels with an acceptable confidence threshold.
Tip
Determine the optimal value of MaxNumKernels for your application by using the evaluateInstanceSegmentation function to evaluate the network performance at different MaxNumKernels values. Increase the value of MaxNumKernels to increase instance segmentation accuracy at the expense of slower inference speed.
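The following is a minimal sketch of specifying a fixed kernel budget, written as the call would appear inside a code generation entry-point function; the value 256 is a hypothetical choice within the documented [1, 3096] range:
% Cap the number of mask-prediction kernels for code generation.
[masks,labels,scores] = segmentObjects(model,I,MaxNumKernels=256);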
MiniBatchSize — Number of observations returned in each batch
1 (default) | positive integer
Number of observations returned in each batch, specified as a positive integer. If you set a higher MiniBatchSize, segmentation requires more memory, which can cause errors if your system does not have sufficient memory.
You can specify this argument only when you specify a batch of images, I, or a datastore of images, imds, as an input to the segmentObjects function.
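For example, this sketch, assuming the model and dsTest datastore from the second example, processes the datastore in mini-batches of four images:
% Larger mini-batches can speed up inference but require more memory.
dsResults = segmentObjects(model,dsTest,MiniBatchSize=4);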
WriteLocation — Location to store writable data
string scalar | character vector
Location to store writable data, specified as a string scalar or character vector. The specified folder must have write permissions. If the folder already exists, the segmentObjects function creates a new folder and adds a suffix to the folder name with the next available number. The default write location is fullfile(pwd,"SegmentObjectResults"), where pwd is the current working directory.
You can specify this argument only when you specify a datastore of images, imds, as an input to the segmentObjects function.
Data Types: char | string
NamePrefix — Prefix added to written filenames
"segmentObj" (default) | string scalar | character vector
Prefix added to written filenames, specified as a string scalar or character vector. The function names the output files NamePrefix_imageName.mat, where imageName is the name of the input image without its file extension.
You can specify this argument only when you specify a datastore of images, imds.
Data Types: char | string
Verbose — Visible progress display
true or 1 (default) | false or 0
Visible progress display, specified as a numeric or logical 1 (true) or 0 (false).
You can specify this argument only when you specify a datastore of images, imds.
Output Arguments
masks — Object masks
H-by-W-by-M logical array | B-by-1 cell array
Object masks, returned as an H-by-W-by-M logical array for a single image or a B-by-1 cell array for a batch of B images. H and W are the height and width, respectively, of the input image I, and M is the number of object masks predicted in the image. Each of the M channels contains the mask for a single predicted object instance.
For a batch of B images, each cell of the B-by-1 cell array contains an H-by-W-by-M array of object masks for the corresponding image from the batch.
labels — Object labels
M-by-1 categorical vector | B-by-1 cell array
Object labels, returned as an M-by-1 categorical vector for a single image or a B-by-1 cell array for a batch of B images. M is the number of predicted object instances in the input image I.
For a batch of B images, each cell of the B-by-1 cell array contains an M-by-1 categorical vector with the labels of the objects in the corresponding image from the batch.
scores — Prediction scores
M-by-1 vector | B-by-1 cell array
Prediction confidence scores, returned as an M-by-1 numeric vector for a single image or a B-by-1 cell array for a batch of B images. M is the number of predicted object instances in the input image I. A higher score indicates higher confidence in the object instance segmentation.
For a batch of B images, each cell of the B-by-1 cell array contains an M-by-1 numeric vector with the confidence scores for the object segmentation predictions in the corresponding image from the batch.
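For a single image, the three outputs share the same instance ordering, so you can index them together. Here is a brief sketch, assuming the masks, labels, and scores variables from the first example, that displays only the highest-scoring instance:
% Show the single most confident predicted instance and its label.
[~,best] = max(scores);
imshow(masks(:,:,best))
title(string(labels(best)))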
dsResults — Predicted instance segmentation results
FileDatastore object
Predicted instance segmentation results, returned as a FileDatastore object. The function organizes the datastore so that calling the read and readall functions on it returns a cell array with three columns. This table describes the format of each cell in each column.
masks | labels | scores |
---|---|---|
Binary masks, returned as a logical array of size H-by-W-by-M, where M is the number of predicted object instances in the corresponding image. Each mask is the segmentation of one object instance in the image. | Object class names, returned as an M-by-1 categorical vector, where M is the number of predicted object instances in the corresponding image. All categorical data returned by the datastore contains the same categories. | Prediction scores, returned as an M-by-1 numeric vector, where M is the number of predicted object instances in the corresponding image. |
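For example, this small sketch, assuming the dsResults datastore from the second example, reads every result at once and counts the predicted instances per image:
% readall returns one row per image: {masks, labels, scores}.
allResults = readall(dsResults);
numInstances = cellfun(@numel,allResults(:,2));
disp(numInstances)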
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
For code generation, the segmentObjects function does not support the WriteLocation, NamePrefix, and Verbose name-value arguments.
For code generation, the MiniBatchSize name-value argument must be a code generation constant (coder.const()).
For code generation, the MaxNumKernels name-value argument must be specified as a non-default value.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
For code generation, the segmentObjects function does not support the WriteLocation, NamePrefix, and Verbose name-value arguments.
For code generation, the MiniBatchSize name-value argument must be a code generation constant (coder.const()).
For code generation, the MaxNumKernels name-value argument must be specified as a non-default value.
Version History
Introduced in R2023b
R2024b: Specify maximum number of convolution kernels to use for code generation
For code generation, the segmentObjects function now requires you to specify the maximum number of convolution kernels to use for mask prediction using the new MaxNumKernels name-value argument.
R2024b: Support for C and GPU code generation
segmentObjects now supports the generation of C code (requires MATLAB® Coder™) and optimized CUDA code (requires GPU Coder™).