Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Segment Anything in High Quality [NeurIPS 2023]
Segment Anything for Stable Diffusion WebUI
#Computer Science# A Python package for segmenting geospatial data with the Segment Anything Model (SAM)
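As a rough illustration of how such a geospatial SAM package is typically driven, here is a minimal sketch using the segment-geospatial (samgeo) Python API; the model type and file names are placeholder assumptions, not values taken from this list.

```python
from samgeo import SamGeo

# Load SAM in automatic mask-generation mode; the checkpoint is
# downloaded on first use if it is not already cached locally.
sam = SamGeo(model_type="vit_h", automatic=True)

# Segment a GeoTIFF ("satellite.tif" is a placeholder path) and write
# the resulting mask as a georeferenced raster.
sam.generate("satellite.tif", output="masks.tif", foreground=True)
```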
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) for k...
Efficient vision foundation models for high-resolution generation and perception.
Effortless AI-assisted data labeling with support from YOLO, Segment Anything (SAM + SAM2), and MobileSAM.
#LLM# Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with diverse controls for user preferences. https://huggingface.co/space...
#LLM# Must-have resource for anyone who wants to experiment with and build on the OpenAI vision API 🔥
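For context, a minimal sketch of calling the OpenAI vision API from the official Python SDK is shown below; the model name and image URL are placeholder assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a vision-capable model to describe an image given by URL.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the objects in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/street.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```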
Streamline the fine-tuning process for multimodal models: PaliGemma, Florence-2, and Qwen2-VL.
SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking.
#NLP# 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model"
A distilled Segment Anything (SAM) model capable of running in real time with NVIDIA TensorRT
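Several of the SAM variants above (HQ-SAM, TinySAM, the TensorRT-distilled model) keep a prompt-based predictor interface modeled on the original segment_anything package. The sketch below shows that reference interface with a placeholder checkpoint, image, and point prompt; individual repos may differ in details.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (placeholder path) and wrap it in a predictor.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Embed the image once, then prompt with a single foreground point.
image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
```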