PyTorch code for training Vision Transformers with the self-supervised learning method DINO
PyTorch code and models for the DINOv2 self-supervised learning method.
Reference PyTorch implementation and models for DINOv3
[ICLR 2023] Official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection"
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Accompanying code for the Paperspace tutorial "Build an AI to play Dino Run"
Fine-tuning Grounding DINO
Automatic Chrome Dino Game
A third-party implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Grounding DINO with Segment Anything & Stable Diffusion colab
Chrome's Dino T-Rex game developed in Jetpack Compose
Chrome's t-rex based bootsector game (512 bytes) written in 16-bit x86 assembly (now with 8086 support!)
Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
Object Tracking with YOLOv5, CLIP, DINO and DeepSORT
DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding
Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
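Several of the repositories above build on the DINO self-distillation objective: a student network is trained to match the output distribution of an EMA teacher, with the teacher outputs centered and sharpened at a low temperature to avoid collapse. As a rough orientation for readers new to the method, here is a minimal PyTorch sketch of that loss; the tensor shapes, temperatures, and function name are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between sharpened teacher targets and student
    predictions, in the style of the DINO self-distillation objective.
    `center` is a running mean of teacher outputs used to avoid collapse.
    Temperatures and shapes here are illustrative, not the paper's exact setup."""
    # Teacher branch: center, then sharpen with a low temperature; no gradient.
    teacher_probs = F.softmax(
        (teacher_logits - center) / teacher_temp, dim=-1
    ).detach()
    # Student branch: log-probabilities at a higher temperature.
    student_log_probs = F.log_softmax(student_logits / student_temp, dim=-1)
    # Per-sample cross-entropy, averaged over the batch.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# Toy usage: a batch of 4 samples with 8-dimensional projection-head outputs.
torch.manual_seed(0)
student = torch.randn(4, 8, requires_grad=True)
teacher = torch.randn(4, 8)
center = teacher.mean(dim=0, keepdim=True)  # stand-in for the EMA center
loss = dino_loss(student, teacher, center)
loss.backward()  # gradients flow only through the student branch
```

In the full method the teacher is an exponential moving average of the student and the center is updated with momentum each step; this sketch only shows the loss computation itself.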