#Large Language Models#Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
#Natural Language Processing#Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
#Large Language Models#🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
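For orientation, the sketch below shows what LoRA fine-tuning typically looks like through the 🤗 PEFT API; the `gpt2` checkpoint and the hyperparameters are placeholders for illustration, not recommendations from the repo.

```python
# Minimal PEFT LoRA sketch: wrap a causal LM so only low-rank adapter weights are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")           # placeholder checkpoint
lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM,
                      r=8, lora_alpha=16, lora_dropout=0.05)  # illustrative hyperparameters
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```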
#Computer Science#Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
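A hedged sketch of the loralib pattern follows: swap `nn.Linear` for `lora.Linear` and freeze everything except the injected low-rank factors; the layer sizes and rank here are made up for illustration.

```python
# loralib sketch: a LoRA-augmented linear layer plus freezing of all non-LoRA parameters.
import torch.nn as nn
import loralib as lora

class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = lora.Linear(768, 768, r=8)   # drop-in replacement for nn.Linear
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.proj(x))

model = TinyBlock()
lora.mark_only_lora_as_trainable(model)          # only the low-rank A/B factors get gradients
adapter_state = lora.lora_state_dict(model)      # checkpoint just the adapter weights
```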
BELLE: Be Everyone's Large Language model Engine (open-source Chinese dialogue LLM)
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
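For context, one common way to apply a trained LoRA to a diffusion model is through Hugging Face diffusers, sketched below; this is not necessarily the repo's own interface, and both the base checkpoint and the weights path are placeholders.

```python
# Hedged sketch: load LoRA adapter weights into a Stable Diffusion pipeline via diffusers.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # placeholder base model
pipe.load_lora_weights("path/to/lora_weights")  # placeholder path to the trained adapter
image = pipe("a watercolor painting of a fox", num_inference_steps=25).images[0]
image.save("fox.png")
```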
#Large Language Models#Use PEFT or full-parameter training to fine-tune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) or 100+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL...
Meshtastic device firmware
ESP32/ESP8285-based High-Performance Radio Link for RC applications
#Large Language Models#Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
#Large Language Models#33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference on a single 4GB GPU
#Large Language Models#Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
#Large Language Models#Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJ...