OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), and Video Instance Segmentation (VIS) within a unified framework (see the inference sketch after this list).
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval
[ECCV'22 Oral] Towards Grand Unification of Object Tracking
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking an...
This repository is a paper digest of Transformer-related approaches in visual tracking tasks.
[IEEE TCYB 2023] The first large-scale tracking dataset fusing RGB and event cameras.
[ECCV'22] The official PyTorch implementation of our ECCV 2022 paper: "AiATrack: Attention in Attention for Transformer Visual Tracking".
[CVPR-2024] The First High Definition (HD) Event-based Visual Object Tracking Benchmark Dataset
Multiple Object Tracking system in Keras, with YOLO as the detection network
[CVPR'23] The official PyTorch implementation of our CVPR 2023 paper: "Generalized Relation Modeling for Transformer Tracking".
This is the official code for SiamAPN & SiamAPN++
A large-scale benchmark dataset for color-event based visual tracking
Official Implementation of Towards Sequence-Level Training for Visual Tracking (ECCV 2022)
Towards More Flexible and Accurate Object Tracking with Natural Language: Algorithms and Benchmark (CVPR 2021)
Person Following Robot - Smart Trolley
Paper collection of RGB-infrared tracking algorithms.
[WACV 2024] Separable Self and Mixed Attention Transformers for Efficient Object Tracking
[BMVC 2023] Mobile Vision Transformer-based Visual Object Tracking
🔦 PyTorch implementation of TrackNet
Modifications to improve single object tracking in 360° equirectangular videos.
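
For the MMTracking entry at the top of this list, the sketch below illustrates frame-by-frame MOT inference, assuming the 0.x-style `mmtrack.apis` interface; the config path, checkpoint, and video filename are placeholders, not verified against the current release.

```python
# Minimal sketch: frame-by-frame MOT inference with MMTracking (0.x-style API).
# Config path, checkpoint, and video filename are placeholders.
import mmcv
from mmtrack.apis import inference_mot, init_model

config_file = 'configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py'  # assumed path
checkpoint = None  # pass a real checkpoint path/URL to obtain meaningful tracks
model = init_model(config_file, checkpoint, device='cuda:0')

video = mmcv.VideoReader('demo.mp4')  # placeholder input video
for frame_id, frame in enumerate(video):
    # Each result carries per-frame detection boxes and their track IDs.
    result = inference_mot(model, frame, frame_id=frame_id)
```

The same `init_model` entry point also backs the SOT, VID, and VIS demos in that toolbox (via `inference_sot`, `inference_vid`), which is what the "unified framework" in the description refers to.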