#Computer Science#Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
#NLP#TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
#Computer Science#A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
#Computer Science#Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models.
#Computer Science#A Toolbox for Adversarial Robustness Research
#Computer Science#A PyTorch adversarial library for attack and defense methods on images and graphs
#Computer Science#Raising the Cost of Malicious AI-Powered Image Editing
#Computer Science#🗣️ Tool to generate adversarial text examples and test machine learning models against them
#Computer Science#Implementation of papers on adversarial examples
#Computer Science#Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
#Awesome#Adversarial attacks and defenses on Graph Neural Networks.
💡 Adversarial attacks on explanations and how to defend them
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)
#Computer Science#A curated list of awesome resources for adversarial examples in deep learning
#Computer Science#Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
#Computer Science#PhD/MSc course on Machine Learning Security (Univ. Cagliari)
#Computer Science#A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
A list of recent papers about adversarial learning
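The common primitive behind most of the evasion toolboxes above (ART, TextAttack, Foolbox-style libraries, Advbox) is the gradient-based attack. As orientation for readers new to the area, here is a minimal Fast Gradient Sign Method (FGSM) sketch in plain NumPy against a toy logistic-regression model; all names are illustrative and this is not the API of any listed library:

```python
import numpy as np

# Toy "model": logistic regression with fixed weights (illustrative only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: move x by eps in the direction
    that increases the binary cross-entropy loss."""
    p = predict(x)
    # For logistic regression with BCE loss, d(loss)/dx = (p - y) * w.
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.2, 0.4, -0.1])
y = 1.0  # true label
x_adv = fgsm(x, y, eps=0.3)

# The perturbed point is at most eps away per coordinate,
# yet the model's confidence in the true label drops.
print(predict(x), predict(x_adv))
```

The real toolboxes generalize exactly this idea: they differentiate a loss with respect to the input (via PyTorch, TensorFlow, or JAX autograd), apply a norm-bounded perturbation, and iterate.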