#Large Language Models# LLaVA is a large language-and-vision assistant with GPT-4V-level capabilities.
#Natural Language Processing# Chinese LLaMA & Alpaca LLMs, with local CPU/GPU training and deployment.
#Natural Language Processing# Chinese LLaMA-2 & Alpaca-2 LLMs: the phase-2 project, with 64K long-context models.
#Large Language Models# Run any Llama 2 model locally with a Gradio UI on GPU or CPU, from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local Llama 2 backend for generative agents/apps.
#Natural Language Processing# Phase-3 project of the Chinese LLaMA & Alpaca series (Chinese Llama-3 LLMs), developed from Meta Llama 3.
#Large Language Models# [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
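To make the idea behind structural pruning concrete, here is a toy sketch: score whole output channels (rows of a weight matrix) by importance and drop the weakest ones, so the remaining matrix stays dense. This is not LLM-Pruner's actual criterion (which uses gradient-based importance over coupled structures); the L2-norm score and the function names are illustrative assumptions.

```python
# Toy structural pruning: remove entire low-importance output channels.
# LLM-Pruner scores coupled structures with gradient-based criteria;
# here we use a simple L2-norm heuristic for illustration.

def channel_norms(weight):
    """L2 norm of each output channel (row) of a weight matrix."""
    return [sum(w * w for w in row) ** 0.5 for row in weight]

def prune_channels(weight, keep_ratio):
    """Keep the highest-norm rows; return (pruned_weight, kept_indices)."""
    norms = channel_norms(weight)
    n_keep = max(1, int(len(weight) * keep_ratio))
    by_importance = sorted(range(len(weight)), key=lambda i: -norms[i])
    kept = sorted(by_importance[:n_keep])
    return [weight[i] for i in kept], kept

weight = [
    [0.9, -0.8, 0.7],   # large-norm channel -> kept
    [0.01, 0.02, 0.0],  # tiny-norm channel  -> pruned
    [0.5, 0.6, -0.4],   # mid-norm channel   -> kept
]
pruned, kept = prune_channels(weight, keep_ratio=0.67)
print(kept)  # -> [0, 2]
```

Because whole rows are removed (rather than zeroing individual weights), the pruned layer is a genuinely smaller dense matrix and needs no sparse kernels to run faster.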
#Natural Language Processing# Running Llama 2 and other open-source LLMs locally on CPU for document Q&A.
#Large Language Models# kani (カニ) is a highly hackable microframework for chat-based language models with tool use/function calling. (NLP-OSS @ EMNLP 2023)
Improve Llama-2's proficiency in comprehension, generation, and translation of Chinese.
#Large Language Models# Like grep, but for natural-language questions. Based on Mistral 7B or Mixtral 8x7B.
Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
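The retrieval step these RAG examples rely on can be sketched in a few lines: embed the documents and the query, rank by similarity, and stuff the best match into the prompt. The bag-of-words embedding, the tiny corpus, and the helper names below are all illustrative assumptions; LlamaIndex and LangChain swap in neural embedders, vector stores, and a real LLM call.

```python
# Toy retrieval-augmented generation (retrieval step only):
# bag-of-words embeddings + cosine similarity instead of a neural embedder.
from collections import Counter
from math import sqrt

def embed(text):
    """Bag-of-words 'embedding' as a token-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Mixtral 8x7B is a sparse mixture-of-experts model.",
    "Llama 2 is a family of open-weight models from Meta.",
]
context = retrieve("who released llama 2?", docs)[0]
prompt = f"Context: {context}\nQuestion: who released llama 2?"
print(context)
```

In a full pipeline, `prompt` would then be sent to the local LLM, which answers using the retrieved context instead of its parametric memory alone.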
#Large Language Models# InsightSolver: Colab notebooks for exploring and solving operational issues using deep learning, machine learning, and related models.
LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces.
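Activation steering can be summarized in one operation: compute a direction in activation space (often the difference of mean activations between two contrastive prompt sets) and add a scaled copy of it to a hidden state at inference time. The sketch below shows that arithmetic on plain Python lists; the vectors and function names are illustrative, not taken from any particular codebase.

```python
# Minimal activation-steering sketch: steer a hidden state along the
# difference-of-means direction between two contrastive activation sets.

def mean_vector(activations):
    """Component-wise mean of equal-length activation vectors."""
    n = len(activations)
    return [sum(v[i] for v in activations) / n for i in range(len(activations[0]))]

def steering_vector(pos_acts, neg_acts):
    """Difference of means: 'positive' minus 'negative' activations."""
    pos, neg = mean_vector(pos_acts), mean_vector(neg_acts)
    return [p - q for p, q in zip(pos, neg)]

def steer(hidden, vec, alpha=1.0):
    """Shift a hidden state by alpha times the steering direction."""
    return [h + alpha * v for h, v in zip(hidden, vec)]

pos = [[1.0, 0.0], [1.0, 0.2]]   # activations on "positive" prompts
neg = [[0.0, 0.0], [0.0, 0.2]]   # activations on "negative" prompts
vec = steering_vector(pos, neg)  # -> [1.0, 0.0]
print(steer([0.5, 0.5], vec, alpha=2.0))  # -> [2.5, 0.5]
```

In a real model the same addition is applied to the residual stream at a chosen layer via a forward hook, and `alpha` controls how strongly the behavior is pushed.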
#Large Language Models# [KO-Platy🥮] The KO-platypus model: llama-2-ko fine-tuned on Korean-Open-platypus.
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
#Large Language Models# Chat with Llama 2, with responses grounded in reference documents retrieved from a vector database. Runs locally using GPTQ 4-bit quantization.
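The 4-bit quantization that makes such local deployment feasible boils down to mapping floats onto 16 integer levels with a shared scale, then dequantizing at compute time. The sketch below shows plain symmetric round-to-nearest quantization; GPTQ itself is more sophisticated (it minimizes layer output error using second-order statistics), so treat this as an assumption-laden illustration of the storage format only.

```python
# Toy 4-bit weight quantization: symmetric round-to-nearest onto the
# integer range [-8, 7], with one per-tensor scale. GPTQ refines this
# with error-compensating updates; the round trip below is the basic idea.

def quantize_4bit(weights):
    """Quantize floats to int levels in [-8, 7]; return (q, scale)."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from 4-bit levels."""
    return [v * scale for v in q]

w = [0.7, -0.35, 0.05, -0.7]
q, scale = quantize_4bit(w)
print(q)                     # small integers, 4 bits of information each
print(dequantize(q, scale))  # approximate reconstruction of w
```

Each weight now needs only 4 bits plus a shared scale, roughly a 4x memory saving over fp16, at the cost of the small reconstruction error visible in the round trip.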
#Large Language Models# Project Zephyrine: a modernized local graphical user interface for LLM interaction, with plug-and-play setup and GPU acceleration.