✨✨Latest Advances on Multimodal Large Language Models
[CVPR 2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"
LLaVA-Mini is a unified large multimodal model (LMM) that supports efficient understanding of images, high-resolution images, and videos.
A minimal codebase for fine-tuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, llama-3.2-vision, qwen-vl, qwen2-vl, phi3-v, etc. (a minimal loading sketch appears after this list)
A collection of visual instruction tuning datasets.
🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)
[EMNLP 2024] A Video Chat Agent with Temporal Prior
Gamified Adversarial Prompting (GAP): crowdsourcing data that targets AI weaknesses through gamification, boosting model performance with community-driven, strategic data collection.
[ECCV 2024] Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models
Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey
Mistral-assisted visual instruction data generation following the LLaVA pipeline (see the generation sketch below)
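
As a rough illustration of the workflow the fine-tuning codebase above wraps, the sketch below loads a LLaVA-1.5 checkpoint with Hugging Face `transformers` and runs one visual-instruction query. The checkpoint id `llava-hf/llava-1.5-7b-hf` and the `USER: <image> ... ASSISTANT:` prompt format match the public Hub release, but the image URL is a placeholder and none of this is that repo's own entry point; it is a minimal sketch of the underlying API.

```python
# Minimal sketch: load a LLaVA-1.5 checkpoint from the Hugging Face Hub and
# answer one visual-instruction query. Assumes transformers, torch, Pillow,
# and requests are installed; the image URL is a placeholder.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
# LLaVA-1.5 instruction format: the <image> token marks where visual tokens go.
prompt = "USER: <image>\nDescribe this image in one sentence. ASSISTANT:"

# Cast floating-point inputs (pixel values) to fp16 to match the model weights.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```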
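
For the Mistral-assisted data generation entry, here is a hedged sketch of the LLaVA-style recipe it follows: pack an image's ground-truth captions into a text-only prompt and ask an instruction-tuned LLM to write question-answer pairs as if it could see the image. The checkpoint id `mistralai/Mistral-7B-Instruct-v0.2`, the example captions, and the prompt wording are illustrative assumptions, not that repo's actual prompts.

```python
# Sketch of LLaVA-style instruction-data generation with a text-only LLM:
# the model never sees pixels, only captions, and is asked to produce
# question-answer pairs about the (unseen) image. Prompt wording and the
# captions below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

captions = [  # ground-truth captions for one image, e.g. from COCO
    "A man riding a bicycle down a rainy street.",
    "A cyclist with an umbrella passes parked cars.",
]
prompt = (
    "Below are captions describing a single image.\n"
    + "\n".join(f"- {c}" for c in captions)
    + "\n\nWrite three question-answer pairs about this image, phrased as if "
    "you can see it. Only ask questions the captions can answer."
)

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```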