#Large Language Models#Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory! 🦥
#Large Language Models#🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, tr...
川虎 ChatGPT, a light, snappy, and easy-to-use web GUI for ChatGPT, ChatGLM, LLaMA, and many other LLMs
#Large Language Models#Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any...
#Large Language Models#Firefly: a training tool for large language models, supporting the training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
An automated document analyzer for Paperless-ngx that uses the OpenAI API, Ollama, Deepseek-r1, Azure, and any other OpenAI-API-compatible service to automatically analyze and tag your documents.
#Large Language Models#A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.
#Large Language Models#Documentation for Google's Gen AI site - including the Gemini API and Gemma
#Large Language Models#Fully-featured web interface for Ollama LLMs
A collection of guides and examples for the Gemma open models from Google.
#NLP#[ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data generation pipeline!
#Large Language Models#Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations.
#Large Language Models#🏗️ Fine-tune, build, and deploy open-source LLMs easily!
InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies.
Unofficial PyTorch/🤗Transformers(Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
#Large Language Models#JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome).
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
#Algorithm Practice#Local LLM Powered Recursive Search & Smart Knowledge Explorer