Fine-tune Mistral-7B on 3090s, A100s, and H100s
Haystack and Mistral 7B RAG implementation, built on a completely open-source stack.
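For orientation, here is a minimal sketch of the retrieve-then-generate pattern such a stack implements. It is not the repo's code: it swaps Haystack for sentence-transformers retrieval plus a plain transformers pipeline, and the documents and model ids are illustrative.

```python
# Minimal retrieve-then-generate sketch: dense retrieval with sentence-transformers,
# generation with a transformers pipeline. Documents and model ids are illustrative.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

docs = [
    "Mistral 7B is a 7.3B-parameter decoder-only language model.",
    "Haystack pipelines combine retrievers, prompt builders, and generators.",
]

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)
generator = pipeline("text-generation",
                     model="mistralai/Mistral-7B-Instruct-v0.1",
                     device_map="auto")

def answer(question: str) -> str:
    # Pick the most similar document by cosine similarity, then prompt the model.
    q_emb = embedder.encode(question, convert_to_tensor=True)
    best = int(util.cos_sim(q_emb, doc_emb).argmax())
    prompt = (f"[INST] Answer using only this context:\n{docs[best]}\n"
              f"Question: {question} [/INST]")
    out = generator(prompt, max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"]

print(answer("How many parameters does Mistral 7B have?"))
```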
A demo-mistral-7b-instruct-v0.1 starter template from Banana.dev for on-demand serverless GPU inference.
Visualize the intermediate output of Mistral 7B
Question Answer Generation App using Mistral 7B, Langchain, and FastAPI.
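A hedged sketch of how a QA-generation service like this might expose Mistral 7B over HTTP with FastAPI. The /generate-qa route, the prompt, and the use of a plain transformers pipeline in place of Langchain are all assumptions, not the repo's code.

```python
# Hypothetical FastAPI endpoint: generate question-answer pairs from a passage
# with Mistral-7B-Instruct via a plain transformers pipeline (route name and
# prompt are illustrative).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation",
                     model="mistralai/Mistral-7B-Instruct-v0.1",
                     device_map="auto")

class Passage(BaseModel):
    text: str
    num_questions: int = 3

@app.post("/generate-qa")
def generate_qa(passage: Passage):
    prompt = (f"[INST] Write {passage.num_questions} question-answer pairs "
              f"about the following passage:\n{passage.text} [/INST]")
    out = generator(prompt, max_new_tokens=256, return_full_text=False)
    return {"qa_pairs": out[0]["generated_text"]}

# Run with: uvicorn app:app --reload
```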
An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model
A RAG implementation using an open-source stack. The app is built with BioMistral 7B, PubMedBERT as the embedding model, Qdrant as a self-hosted vector DB, and Langchain & Llam...
Local PDF Chat Application with Mistral 7B LLM, Langchain, Ollama, and Streamlit
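A sketch of just the chat step against a local Ollama server. It assumes `ollama pull mistral` has been run and the server is on the default port 11434; PDF loading, chunking, and the Streamlit UI are left out.

```python
# Send retrieved PDF chunks plus a question to a local Ollama server running
# Mistral 7B. Assumes the default Ollama endpoint; chunk text is a placeholder.
import requests

def ask_pdf(question: str, context_chunks: list[str]) -> str:
    prompt = ("Answer from the context below.\n\n"
              + "\n\n".join(context_chunks)
              + f"\n\nQuestion: {question}")
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_pdf("What is the paper's main claim?", ["...chunk text from the PDF..."]))
```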
Finetune mistral-7b-instruct for sentence embeddings
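A sketch of the embedding-extraction side of this idea: mean-pooling a decoder-only model's last hidden states into sentence vectors. The pooling choice and checkpoint are assumptions; a contrastive fine-tuning objective, as in the repo, would be trained on top of vectors like these.

```python
# Sentence embeddings from a decoder-only model by mean-pooling the last hidden
# states over non-padding tokens (checkpoint and pooling are illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.1"
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token  # Mistral ships without a pad token
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

def embed(sentences: list[str]) -> torch.Tensor:
    batch = tok(sentences, padding=True, return_tensors="pt").to(model.device)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)    # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)             # zero out padding positions
    return summed / mask.sum(dim=1)                 # mean pool

emb = embed(["Mistral 7B is fast.", "The wind is strong today."])
print(emb.shape)  # (2, hidden_size)
```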
#Large Language Models# Official inference library for Mistral models
RAG (Retrieval-Augmented Generation) implementation using Mistral-7B-Instruct-v0.1
#Computer Science# Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a Raspberry Pi Zero 2 (or in 298MB of RAM), but also Mistral 7B on desktops and servers. ARM, x86, WASM, RISC-...
Qwen-7B and Qwen-14B fine-tuning
Workflow Service for OpenStack. Mirror of code maintained at opendev.org.
#Natural Language Processing# A large-scale 7B pretrained language model developed by BaiChuan-Inc.
Finetune LLaMA-7B with Chinese instruction datasets
Tutorial on training and evaluating LLMs, and on using RAG, Agents, and Chains to build entertaining LLM applications.
Guide for fine-tuning Llama/Mistral/CodeLlama models and more
Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face 🤗 Transformers.
#Large Language Models# Official release of InternLM2.5 base and chat models, with 1M-token context support
#Natural Language Processing# Tongyi Qianwen-7B (Qwen-7B) is the 7-billion-parameter model in the Tongyi Qianwen large model series developed by Alibaba Cloud
Ocean simulation based on Tessendorf's FFT technique and Gerstner waves, using a Stockham FFT formulation, with whitecaps.
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Fine-tuning of the Falcon-7B LLM using QLoRA on a mental-health conversational dataset
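A minimal QLoRA sketch with transformers, bitsandbytes, and peft: load the base model in 4-bit and attach LoRA adapters. The LoRA hyperparameters and target modules are illustrative, and the actual training loop and dataset handling are omitted.

```python
# QLoRA setup sketch: 4-bit NF4 quantization via bitsandbytes, LoRA adapters via
# peft. Hyperparameters are illustrative; a trainer (e.g. trl's SFTTrainer) would
# run the actual fine-tuning on top of this model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```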
Command-line script for running inference with models such as MPT-7B-Chat
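A sketch of what such a command-line script can look like, built on argparse and a transformers text-generation pipeline; the flags, defaults, and prompt handling are assumptions rather than the repo's interface.

```python
# Hypothetical CLI for text generation with a chat model; model id, flags, and
# defaults are illustrative.
import argparse
from transformers import pipeline

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate text from a chat model")
    parser.add_argument("prompt")
    parser.add_argument("--model", default="mosaicml/mpt-7b-chat")
    parser.add_argument("--max-new-tokens", type=int, default=200)
    args = parser.parse_args()

    generator = pipeline("text-generation", model=args.model,
                         device_map="auto", trust_remote_code=True)
    out = generator(args.prompt, max_new_tokens=args.max_new_tokens,
                    return_full_text=False)
    print(out[0]["generated_text"])

if __name__ == "__main__":
    main()

# Usage: python infer.py "Write a haiku about the mistral wind."
```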
The "vicuna-installation-guide" provides step-by-step instructions for installing and configuring Vicuna 13 and 7B