#Large Language Models#We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) into a single framework for easy use. We welcome open-source enthusiasts to...
#Natural Language Processing#An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
#Natural Language Processing#A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".
#Natural Language Processing#This repository contains AI Bootcamp material consisting of a workflow for LLMs
Code for the COLING 2022 paper, DPTDR: Deep Prompt Tuning for Dense Passage Retrieval
#Large Language Models#This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It will also have hands-on ...
Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks or benchmarks (a minimal PEFT sketch illustrating these methods follows this list).
#Large Language Models#Reproduction of the prompt-learning method P-Tuning v2 from the paper "P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks"; models used: DeBERTa + ChatGLM2, addi...
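Several of the entries above (the PEFT comparison and the P-Tuning v2 reproduction in particular) revolve around the same pattern: freezing a backbone model and training only a small set of extra parameters. The following is a minimal sketch, not taken from any repository listed here, assuming the Hugging Face `transformers` and `peft` libraries; the checkpoint name and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: wrap a frozen backbone with either LoRA or a
# P-Tuning v2-style deep prompt (prefix tuning) using the PEFT library.
# The checkpoint and hyperparameters below are illustrative, not from
# any of the repositories listed above.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, PrefixTuningConfig, TaskType, get_peft_model

BACKBONE = "microsoft/deberta-v3-base"  # assumed checkpoint for illustration

def build_peft_model(method: str):
    model = AutoModelForSequenceClassification.from_pretrained(BACKBONE, num_labels=2)
    if method == "lora":
        # LoRA: inject trainable low-rank adapters into attention projections.
        config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
    elif method == "p-tuning-v2":
        # P-Tuning v2 / deep prompt tuning: trainable prefix tokens at every layer.
        config = PrefixTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
    else:
        raise ValueError(f"unknown method: {method}")
    peft_model = get_peft_model(model, config)
    peft_model.print_trainable_parameters()  # only a small fraction of weights are trainable
    return peft_model

lora_model = build_peft_model("lora")
ptv2_model = build_peft_model("p-tuning-v2")
```

Either wrapped model can then be passed to a standard `transformers` `Trainer`; swapping adaptation methods only changes the config object, which is what makes side-by-side PEFT comparisons straightforward.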