PyTorch implementation of adversarial attacks [torchattacks] (see the usage sketch after this list)
An adversarial example library for constructing attacks, building defenses, and benchmarking both
PyTorch implementation of convolutional neural network adversarial attack techniques
A non-targeted adversarial attack method, which won first place in the NIPS 2017 non-targeted adversarial attacks competition
Robust evasion attacks against neural networks for finding adversarial examples
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Must-read Papers on Textual Adversarial Attack and Defense
A curated list of adversarial attacks and defenses papers on graph-structured data.
An Open-Source Package for Textual Adversarial Attack.
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Adversarial attacks and defenses on Graph Neural Networks.
Adversarial attacks on Deep Reinforcement Learning (RL)
Adversarial Attacks on Node Embeddings via Graph Poisoning
A targeted adversarial attack method, which won the NIPS 2017 targeted adversarial attacks competition
💡 Adversarial attacks on explanations and how to defend them
Code for ICML 2019 paper "Simple Black-box Adversarial Attacks"
Circumventing the defense in "Ensemble Adversarial Training: Attacks and Defenses"
A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks" (see the PGD sketch after this list)
🔥🔥Defending Against Deepfakes Using Adversarial Attacks on Conditional Image Translation Networks
Code for "Adversarial Camouflage: Hiding Physical World Attacks with Natural Styles" (CVPR 2020)
Transfer attacks; adversarial examples; black-box attacks; unrestricted adversarial attacks on ImageNet; CVPR 2021 Tianchi black-box competition
Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598)