#大语言模型#Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, a...
#大语言模型#🐢 Open-Source Evaluation & Testing for AI & LLM systems
the LLM vulnerability scanner
#大语言模型#[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
#大语言模型#The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
#大语言模型#A secure low code honeypot framework, leveraging LLM for System Virtualization.
An easy-to-use Python framework to generate adversarial jailbreak prompts.
#大语言模型#A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
#Awesome#Papers and resources related to the security and privacy of LLMs 🤖
#大语言模型#A security scanner for your LLM agentic workflows
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
#大语言模型#This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
#大语言模型#Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agents workflow in real-time.
Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running a...
AI-driven Threat modeling-as-a-Code (TaaC-AI)
The fastest Trust Layer for AI Agents
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and...
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️