Meta AI, the artificial intelligence division of Meta, has recently open-sourced a powerful new image segmentation model called the “Segment Anything Model” (SAM). Trained on 11 million images and 1.1 billion segmentation masks, SAM can recognize and segment virtually any object within images or videos.
The model’s potential goes beyond merely serving as an image-editing tool. Meta AI’s demonstrations show that SAM can be combined with other systems: for example, segmenting objects in real time within a robot’s visual field so that downstream models can analyze the specific contents of the scene.
While the full range of applications for this model has not yet been disclosed, Meta AI has positioned it as a way to promote research into foundation models for computer vision. It is plausible that Meta AI is also pursuing other aspects of image recognition, with SAM serving as a core component for separating objects from images or video frames, which would align with the company’s stated focus on advancing the foundations of computer vision.
Given the rapid pace of AI development, it is likely that this object segmentation technology will become increasingly widespread in the future.
Meta AI has launched a demo website where users can test the model at https://segment-anything.com/. The company currently offers three model sizes—2.4 GB, 1.2 GB, and 358 MB (the ViT-H, ViT-L, and ViT-B variants, respectively)—to suit developers deploying the model locally.
Those interested in running the model locally can follow the instructions at https://github.com/facebookresearch/segment-anything
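For a sense of what local deployment looks like, here is a minimal sketch in Python based on the `SamPredictor` and `sam_model_registry` interfaces exposed by the segment-anything repository. The checkpoint and image paths are placeholders you must supply yourself, and `pick_model_type` is an illustrative helper (not part of the SAM library) that maps the three download sizes above to their registry names.

```python
def pick_model_type(max_size_mb: int) -> str:
    """Illustrative helper: choose the largest SAM variant whose
    checkpoint fits a download budget, using the three sizes Meta
    currently offers (ViT-H ~2.4 GB, ViT-L ~1.2 GB, ViT-B ~358 MB)."""
    sizes = [("vit_h", 2400), ("vit_l", 1200), ("vit_b", 358)]
    for name, size_mb in sizes:
        if size_mb <= max_size_mb:
            return name
    return "vit_b"  # fall back to the smallest variant


def segment_with_point(checkpoint_path: str, image_path: str, point_xy):
    """Sketch of prompting SAM with a single foreground point.
    Requires torch, opencv-python, and the segment-anything package:
        pip install git+https://github.com/facebookresearch/segment-anything.git
    """
    import numpy as np
    import cv2
    from segment_anything import SamPredictor, sam_model_registry

    model_type = pick_model_type(max_size_mb=1500)  # e.g. a 1.5 GB budget
    sam = sam_model_registry[model_type](checkpoint=checkpoint_path)
    predictor = SamPredictor(sam)

    # SAM expects an RGB image; OpenCV loads BGR, so convert first.
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)

    # Prompt with one (x, y) pixel coordinate labeled as foreground (1).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy]),
        point_labels=np.array([1]),
    )
    return masks, scores
```

The point prompt is only one option: the same `predict` call also accepts bounding boxes, and the repository additionally provides an automatic mask generator for segmenting everything in an image without prompts.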
The research paper on SAM can be accessed at https://arxiv.org/abs/2304.02643