#Computer Science# Code and documentation to train Stanford's Alpaca models, and generate the data.
✨✨Latest Advances on Multimodal Large Language Models
#Large Language Models# An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
#Natural Language Processing# An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
#Computer Science# An Open-sourced Knowledgeable Large Language Model Framework.
#Data Warehouse# A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca).
#Large Language Models# This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vert...
#Natural Language Processing# A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
#Large Language Models# PhoGPT: Generative Pre-training for Vietnamese (2023)
#Natural Language Processing# Reading list on instruction tuning. A trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
#Large Language Models# A collection of ChatGPT and GPT-3.5 instruction-based prompts for generating and classifying text.
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
#Large Language Models# EVE Series: Encoder-Free Vision-Language Models from BAAI
#Large Language Models# [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
#Natural Language Processing# [ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
[EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models
#Natural Language Processing# Finetune LLaMA-7B with Chinese instruction datasets
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
[ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models