The simplest, fastest repository for training/finetuning medium-sized GPTs.
Scikit-learn style model finetuning for NLP
PyTorch native finetuning library
#Large Language Models# LLM Finetuning with peft
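A minimal sketch of the LoRA-adapter workflow these peft-based repos build on (the base model and hyperparameters below are illustrative assumptions, not taken from any listed project):

```python
# Attach LoRA adapters to a causal LM with Hugging Face PEFT, so only a small
# set of low-rank matrices is trained while the base weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the update matrices
    lora_alpha=16,    # scaling factor
    lora_dropout=0.05,
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```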
Code and model for the paper "Improving Language Understanding by Generative Pre-Training"
Fine-tune CNN in Keras
Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
PyTorch BERT finetuning for Chinese text classification
#Computer Science# Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
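A hedged illustration of that zero-code-change scaling idea; the toy model and random data below are assumptions for the sketch, and switching hardware only changes the Trainer flags.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    """Tiny illustrative model; the training loop itself lives in Lightning."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)
# The same script scales to multiple GPUs or TPUs by changing accelerator/devices only.
trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices="auto")
trainer.fit(LitRegressor(), data)
```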
Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
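Roughly, the trick in that guide is DeepSpeed ZeRO offload wired into the Hugging Face Trainer; the sketch below uses an assumed configuration and a hypothetical `train_dataset`, not the guide's exact settings.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# ZeRO stage 2 with optimizer state offloaded to CPU RAM, which is what lets
# a billion-parameter model train on a single consumer GPU.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},
    },
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

model = AutoModelForCausalLM.from_pretrained("gpt2-xl")
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed=ds_config,  # the HF Trainer accepts a dict or a path to a JSON file
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()  # typically launched via the `deepspeed` launcher
```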
Finetune ModelScope's Text To Video model using Diffusers 🧨
Finetune LLaMA-7B with Chinese instruction datasets
Finetune SegmentAnything for Road Segmentation
Baichuan LLM supervised finetuning with LoRA
How to finetune mbart using fairseq
ChatGLM2-6B fine-tuning: SFT/LoRA, instruction finetuning
Finetune llama2-70b and codellama on MacBook Air without quantization
Finetune glide-text2im from openai on your own data.
Finetune the Baichuan pretrained model with the QLoRA method
#Android# Fine-tune the Whisper speech recognition model to support training without timestamp data, training with timestamp data, and training without speech data. Accelerate inference and support Web deployment...
Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!)
Python script to preprocess images of all Pokémon to finetune ruDALL-E
The most beginner-friendly, zero-barrier-to-entry ChatGLM3 & agent & LangChain project
Text classification based on BERT finetuning
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
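For the DETR entry above, a rough sketch of how the Transformers checkpoint is typically reloaded with a fresh classification head before fine-tuning (the label count and data handling are assumptions, not the notebook's exact code):

```python
from transformers import DetrForObjectDetection, DetrImageProcessor

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained(
    "facebook/detr-resnet-50",
    num_labels=3,                  # replace with your dataset's class count
    ignore_mismatched_sizes=True,  # reinitialize the classification head
)
# Preprocess images and COCO-style annotations with `processor`, then optimize
# model(**batch).loss in an ordinary PyTorch (or Lightning) training loop.
```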