[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Shot2Story: a new multi-shot video understanding benchmark with comprehensive video summaries and detailed shot-level captions.
Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral]
Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges
A survey on video and language understanding.
ACM Multimedia 2023 (Oral) - RTQ: Rethinking Video-language Understanding Based on Image-text Model
The official GitHub page for the survey paper "Self-Supervised Learning for Videos: A Survey"