#Awesome# This repository is primarily maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), artif...
#LLM# 🐢 Open-Source Evaluation & Testing for AI & LLM systems
A curated list of useful resources that cover Offensive AI.
#Computer Science# A list of backdoor learning resources
#LLM# A prompt injection scanner for custom LLM applications
RuLES: a benchmark for evaluating rule-following in language models
#LLM# Toolkits that create a human-in-the-loop approval layer to monitor and guide AI agent workflows in real time.
A curated list of academic events on AI Security & Privacy
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
Whistleblower is an offensive security tool for testing system prompt leakage and capability discovery in AI applications exposed through an API. Built for AI engineers, security researchers and...
The official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack -- it takes only THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack...
#NLP# A framework for testing vulnerabilities of large language models (LLMs).
Code for "Adversarial attack by dropping information." (ICCV 2021)
Cyber-Security Bible! Theory and tools: Kali Linux, penetration testing, bug bounty, CTFs, malware analysis, cryptography, secure programming, web app security, cloud security, DevSecOps, ethical hack...
#Computer Science# Train AI (Keras + TensorFlow) to defend apps with Django REST Framework + Celery + Swagger + JWT -- deploys to Kubernetes and OpenShift Container Platform
#LLM# Website vulnerability scanning using OpenAI technology
this.env defines, locks, and hashes the environment to establish a reliable and secure operational context. By detecting and responding to changes, it ensures consistency and integrity, especially for...
[NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time
#Computer Science# ATLAS tactics, techniques, and case study data
#Computer Science# Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"