#Computer Science# Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
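A minimal sketch of wiring up an ART evasion attack, assuming a stand-in untrained PyTorch model and random inputs (the architecture, `eps`, and dummy data are illustrative placeholders, not taken from the ART docs):

```python
import numpy as np
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Stand-in model and data; any trained torch.nn.Module and real test set fit the same pattern.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)

# Wrap the model so ART attacks can query its predictions and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# FGSM evasion attack: perturb each input by at most eps in the L-inf norm.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

# Compare predictions on clean vs. adversarial inputs.
preds_clean = classifier.predict(x_test).argmax(axis=1)
preds_adv = classifier.predict(x_test_adv).argmax(axis=1)
print((preds_clean != preds_adv).mean())
```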
#NLP# TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
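A short sketch of TextAttack's Python API, assuming a fine-tuned HuggingFace checkpoint (`textattack/bert-base-uncased-imdb` is an illustrative choice) and the `imdb` dataset can be downloaded:

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Victim model: a fine-tuned sequence classifier from the HuggingFace Hub (illustrative choice).
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler attack recipe against the wrapped model.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")

# Attack a handful of examples and log the perturbed texts.
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()
```

The same recipe can also be launched from the `textattack attack` command-line entry point.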
#Computer Science# A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
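This description matches the Foolbox project; assuming that is the library in question, a minimal L-inf PGD sketch against a pretrained torchvision ResNet-18 (model choice and perturbation budget are illustrative):

```python
import foolbox as fb
import torchvision.models as models

# Pretrained ImageNet classifier used purely as an example victim model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A few sample ImageNet images bundled with Foolbox for quick experiments.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

# L-inf PGD with an 8/255 budget; is_adv flags which inputs were successfully fooled.
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)
print(is_adv)
```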
#Computer Science# Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models.
#Computer Science# A Toolbox for Adversarial Robustness Research
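This is the tagline of the AdverTorch toolbox; assuming that is the project, a hedged sketch of its PGD attack interface with a stand-in model and a random batch:

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Stand-in classifier and batch; any trained model and real data use the same interface.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

# L-inf PGD: 40 iterations of step size 0.01 within an eps=0.3 ball, with random init.
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3,
    nb_iter=40,
    eps_iter=0.01,
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)
x_adv = adversary.perturb(x, y)
```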
#Computer Science# A PyTorch adversarial library for attack and defense methods on images and graphs
#Computer Science# Raising the Cost of Malicious AI-Powered Image Editing
#Computer Science# 🗣️ Tool to generate adversarial text examples and test machine learning models against them
#Computer Science# Implementation of Papers on Adversarial Examples
#Computer Science# Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
#Awesome# Adversarial attacks and defenses on Graph Neural Networks.
💡 Adversarial attacks on explanations and how to defend them
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
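A condensed sketch of the auto_LiRPA bound-computation workflow on a toy network (architecture, input shape, and `eps` are placeholders):

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Toy network and inputs; any supported nn.Module / computational graph can be wrapped.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(2, 1, 28, 28)

# Wrap the model; the dummy input is only used to trace the computational graph.
bounded_model = BoundedModule(model, torch.zeros_like(x))

# Define an L-inf perturbation ball of radius eps around the concrete input.
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)
bounded_x = BoundedTensor(x, ptb)

# Certified lower/upper bounds on every output logit under the perturbation
# ("backward" mode corresponds to CROWN-style linear relaxation).
lb, ub = bounded_model.compute_bounds(x=(bounded_x,), method="backward")
```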
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)
#Computer Science# A curated list of awesome resources for adversarial examples in deep learning
#Computer Science# Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR 2018)
#Computer Science# A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
#Computer Science# PhD/MSc course on Machine Learning Security (Univ. Cagliari)
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.