segmentAnythingModel

Pretrained Segment Anything Model (SAM) for semantic segmentation

Since R2024a

Description

Use the segmentAnythingModel object and its object functions to interactively segment objects in an image using visual prompts.

A segmentAnythingModel object configures a pretrained Segment Anything Model (SAM) for semantic segmentation of objects in an image without retraining the model. To learn more about the model and the training data, see the SA-1B Dataset page.

To initiate the segmentation workflow, you must first use the extractEmbeddings object function to extract the image embeddings from the SAM image encoder. To perform the segmentation, use the segmentObjectsFromEmbeddings object function to segment objects from the image embeddings using the image decoder.
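The two-step workflow described above can be sketched as follows. This is a minimal sketch, not a complete reference: the image file and the bounding box coordinates are placeholder values, and the BoundingBox name-value argument shown is based on the segmentObjectsFromEmbeddings object function page.

```matlab
% Load an image and configure the pretrained SAM (requires the
% Image Processing Toolbox Model for Segment Anything Model add-on).
I = imread("peppers.png");      % example image shipped with MATLAB
sam = segmentAnythingModel;

% Step 1: run the SAM image encoder once to obtain the image embeddings.
embeddings = extractEmbeddings(sam,I);

% Step 2: segment an object from the embeddings using a visual prompt.
% Here the prompt is a bounding box in [x y width height] format
% (illustrative coordinates).
box = [100 100 200 200];
mask = segmentObjectsFromEmbeddings(sam,embeddings,size(I), ...
    BoundingBox=box);

% Display the resulting binary mask over the image.
imshow(labeloverlay(I,mask))
```

Because the encoder pass in step 1 is the expensive part, you can reuse the same embeddings with different prompts to segment multiple objects in the image.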

Note

This functionality requires Deep Learning Toolbox™, Computer Vision Toolbox™, and the Image Processing Toolbox™ Model for Segment Anything Model. You can install the Image Processing Toolbox Model for Segment Anything Model from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

Creation

Description

sam = segmentAnythingModel creates a pretrained Segment Anything Model (SAM), trained on the Segment Anything 1 Billion (SA-1B) data set. To use this model to interactively segment objects in images using visual prompts, pass it to the extractEmbeddings object function.


Object Functions

extractEmbeddings - Extract feature embeddings from Segment Anything Model (SAM) encoder
segmentObjectsFromEmbeddings - Segment objects in image using Segment Anything Model (SAM) feature embeddings

Examples


Create a Segment Anything Model (SAM) object for image segmentation.

sam = segmentAnythingModel;

Use the sam object to extract feature embeddings from an image in a segmentation workflow.
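For instance, continuing from the sam object created above, a segmentation from a single foreground point might look like the following sketch. The image and point coordinates are illustrative placeholders, and the ForegroundPoints name-value argument is based on the segmentObjectsFromEmbeddings object function page.

```matlab
I = imread("peppers.png");               % example image
embeddings = extractEmbeddings(sam,I);   % encoder pass

% Segment the object at a user-chosen foreground point in [x y] format
% (illustrative coordinates).
point = [220 160];
mask = segmentObjectsFromEmbeddings(sam,embeddings,size(I), ...
    ForegroundPoints=point);

imshow(labeloverlay(I,mask))
```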

References

[1] Kirillov, Alexander, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, et al. "Segment Anything," April 5, 2023. https://doi.org/10.48550/arXiv.2304.02643.

Version History

Introduced in R2024a