#Large Language Models#Use PEFT or full-parameter training to finetune 450+ LLMs (Qwen2.5, InternLM3, GLM4, Llama3.3, Mistral, Yi1.5, Baichuan2, DeepSeek-R1, ...) and 150+ MLLMs (Qwen2.5-VL, Qwen2-Audio, Llama3.2-Vision, Llava, Inte... (see the LoRA sketch after this list)
#Natural Language Processing#Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial large language models, together with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.)
#Natural Language Processing#A Deep Learning NLP repository built with TensorFlow, covering everything from text preprocessing to downstream tasks with recent models such as Topic Models, BERT, GPT, and LLMs.
#Natural Language Processing#Code and datasets for "Character-LLM: A Trainable Agent for Role-Playing"
#Awesome#Awesome-RAG: a collection of representative RAG papers and systems.
🚀LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
#Awesome#Fine-Tuning Dataset Auto-Generation for Graph Query Languages.
#Blockchain#Elven Tools CLI - a command-line tool for launching NFT collections on the MultiversX blockchain (plus other tools).
MATLAB simulation of LDPC codes using BPSK modulation over an AWGN channel, decoded with the Sum-Product and Min-Sum algorithms (see the Min-Sum sketch after this list).
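
For the PEFT fine-tuning entry above, here is a minimal sketch of the LoRA flavour of PEFT, assuming the Hugging Face `transformers` and `peft` libraries; the base model name and the hyperparameters are illustrative assumptions, not values taken from that repository.

```python
# Minimal LoRA (PEFT) setup sketch -- model name and hyperparameters are
# illustrative assumptions, not values from the repository listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B"  # hypothetical base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapters instead of all base weights.
lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only adapter weights are trainable
```

The wrapped model can then be trained with any standard training loop or `transformers.Trainer`; full-parameter fine-tuning simply skips the adapter wrapping.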
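
For the LDPC entry above, the repository itself is in MATLAB; as a hedged illustration of the decoding step it names, the following NumPy sketch computes BPSK/AWGN channel LLRs and runs Min-Sum message passing. The toy parity-check matrix, the noise level, and the function name `min_sum_decode` are assumptions for illustration only, not taken from that repository.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=20):
    """Min-Sum LDPC decoding of channel LLRs (positive LLR favours bit 0)."""
    m, n = H.shape
    rows, cols = np.nonzero(H)              # edges of the Tanner graph
    M = llr[cols].astype(float)             # variable-to-check messages
    E = np.zeros_like(M)                    # check-to-variable messages
    for _ in range(max_iters):
        # Check-node update: product of signs times minimum magnitude,
        # each excluding the edge being updated.
        for c in range(m):
            idx = np.where(rows == c)[0]
            msgs = M[idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1
            total_sign = np.prod(signs)
            abs_msgs = np.abs(msgs)
            for k, e in enumerate(idx):
                E[e] = total_sign * signs[k] * np.delete(abs_msgs, k).min()
        # Variable-node update and tentative hard decision.
        post = llr.copy()
        np.add.at(post, cols, E)            # posterior LLR per bit
        bits = (post < 0).astype(int)
        if not np.any((H @ bits) % 2):      # all parity checks satisfied
            return bits
        M = post[cols] - E                  # exclude the receiving check's message
    return bits

# Example: BPSK over AWGN with a toy parity-check matrix (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.zeros(6, dtype=int)           # the all-zero codeword is always valid
symbols = 1 - 2 * codeword                  # BPSK mapping: 0 -> +1, 1 -> -1
sigma = 0.8
received = symbols + sigma * np.random.randn(6)
llr = 2 * received / sigma**2               # channel LLRs for BPSK over AWGN
print(min_sum_decode(H, llr))
```

The Sum-Product algorithm differs only in the check-node update, which uses a tanh/arctanh product instead of the sign-and-minimum approximation shown here.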