#LLM#Run Mixtral-8x7B models in Colab or on consumer desktops
#LLM#A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
#NLP#Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
Mixtral on Your Local Computer
#LLM#Private chat with local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
Inference code for 'mixtral-8x7b-32kseqlen'
LLM Siri with OpenAI, Perplexity, Ollama, Llama2, Mistral, Mixtral & Langchain
Mental health large language models (The Big Model of Mental Health): LLM fine-tuning with InternLM2, InternLM2.5, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, LLama3, GLM4, Qwen2, LLama3.1
Generate your Twitter bio with AI
An easy-to-use, scalable, and high-performance RLHF framework (70B+ PPO full tuning & iterative DPO & LoRA & RingAttention)
#LLM#The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
#LLM#Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command ...
#LLM#Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such ...
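Many of the projects above target local Mixtral inference. As a minimal, hypothetical sketch (not taken from any specific project listed here), loading a 4-bit quantized Mixtral checkpoint with Hugging Face transformers and bitsandbytes might look like this; the checkpoint id and quantization settings are assumptions:

```python
# Minimal sketch: 4-bit quantized Mixtral inference via Hugging Face
# transformers + bitsandbytes. Assumes a CUDA GPU with enough memory;
# this is illustrative, not the code of any project in the list above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed checkpoint

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # lets accelerate place expert layers across devices
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```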