#LLM#Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳 Docker-friendly. ⚡ Always in sync with SharePoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, a...
#LLM#🐢 Open-Source Evaluation & Testing for AI & LLM systems
the LLM vulnerability scanner
#LLM#[CCS'24] A dataset of 15,140 ChatGPT prompts collected from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
#LLM#The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
#LLM#A secure low-code honeypot framework, leveraging AI for system virtualization.
An easy-to-use Python framework to generate adversarial jailbreak prompts.
#Awesome#Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
🏴‍☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
#LLM#This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses.
#LLM#Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agent workflows in real time.
AI-driven Threat Modeling-as-Code (TaaC-AI)
Experimental tools to backdoor large language models by rewriting their system prompts at a raw parameter level. This can enable offline remote code execution without running a...
The fastest and easiest LLM security guardrails for CX AI agents and applications.
Whistleblower is an offensive security tool for testing for system prompt leakage and capability discovery in an AI application exposed through an API. Built for AI engineers, security researchers and...
Framework for LLM evaluation, guardrails and security
Ultra-fast, low-latency LLM prompt injection/jailbreak detection ⛓️
#LLM#A benchmark for prompt injection detection systems.