#Large Language Models#🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
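For orientation, a minimal sketch of how such a library is typically used, assuming a Transformers backbone; the model name and LoRA hyperparameters below are illustrative assumptions, not an example taken from the repository:

```python
# Minimal sketch (illustrative, not the repository's own example): wrap a
# Hugging Face Transformers model with a LoRA adapter using 🤗 PEFT so that
# only a small set of low-rank matrices (plus the task head) is trained.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed backbone and label count
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification task
    r=8,                         # low-rank dimension (assumption)
    lora_alpha=16,               # LoRA scaling factor (assumption)
    lora_dropout=0.1,
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports only a small fraction as trainable
```

The wrapped `model` can then be trained with a standard Transformers `Trainer` or a plain PyTorch loop; only the adapter weights and the classification head receive gradients.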
#Natural Language Processing#A Unified Library for Parameter-Efficient and Modular Transfer Learning
#Natural Language Processing#An optimized deep prompt tuning strategy that is comparable to fine-tuning across scales and tasks
#Natural Language Processing#A plug-and-play library for parameter-efficient tuning (Delta Tuning)
#Natural Language Processing#A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".
#Natural Language Processing#Live Training for Open-source Large Models
#Computer Science#A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.
#Large Language Models#Research Trends in LLM-guided Multimodal Learning.
#Natural Language Processing#Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
K-CAI NEURAL API - Keras-based neural network API that lets you create parameter-efficient, memory-efficient, FLOPs-efficient multipath models with new layer types. There are plenty of examples...
CodeUp: A Multilingual Code Generation Llama-X Model with Parameter-Efficient Instruction-Tuning
#Awesome#This repository surveys papers on prompting and adapters for speech processing.
#Natural Language Processing#On Transferability of Prompt Tuning for Natural Language Processing
[CVPR 2024] Code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
#Computer Science#[arXiv] Cross-Modal Adapter for Text-Video Retrieval
#Natural Language Processing#Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules"
INTERSPEECH 2023 - Repurposing Whisper to recognize new tasks with adapters!
Code for the EACL 2023 paper "Udapter: Efficient Domain Adaptation Using Adapters"
CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing (ACL 2022)
This repository contains the source code for the paper "Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks".
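A minimal sketch of the idea behind that last entry, assuming tf.keras and illustrative channel/group counts (not code from the paper's repository): replacing a dense pointwise (1×1) convolution with a grouped one divides its weight count by the number of groups.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): comparing a
# standard pointwise (1x1) convolution with a grouped pointwise convolution.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 256))

# Standard pointwise convolution: 1*1*256*128 = 32,768 weights (plus 128 biases).
dense_pw = layers.Conv2D(filters=128, kernel_size=1)(inputs)

# Grouped pointwise convolution with 16 groups: each group maps 16 input
# channels to 8 output channels, so 16 * (16 * 8) = 2,048 weights (plus biases).
grouped_pw = layers.Conv2D(filters=128, kernel_size=1, groups=16)(inputs)

model = tf.keras.Model(inputs, [dense_pw, grouped_pw])
model.summary()  # the per-layer parameter counts show the 16x reduction
```

Here the grouped layer uses 16× fewer 1×1 weights (2,048 vs. 32,768); how such grouped pointwise layers are combined into full architectures is what the repository above covers.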