Adversarial Robustness Toolbox (ART): a Python library for machine learning security, covering evasion, poisoning, extraction, and inference attacks and defenses for red and blue teams.
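A minimal sketch of an evasion attack with ART, assuming an already-trained PyTorch classifier. The `PyTorchClassifier` wrapper and `FastGradientMethod` attack follow ART's documented API; the toy model, random inputs, and epsilon value are illustrative placeholders, not recommended settings.

```python
# Hedged sketch: wrap an existing PyTorch classifier with ART and run FGSM evasion.
# The model, loss, input shape, and eps below are placeholders, not ART defaults.
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-sized model
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = torch.rand(8, 1, 28, 28).numpy()          # ART works on NumPy arrays
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)                   # adversarial (evasion) examples
preds = classifier.predict(x_adv)                   # predictions on perturbed inputs
```

The same `classifier` object can be passed to ART's poisoning, extraction, and inference attack classes, which is the library's main appeal for red/blue team workflows.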
#NLP# TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. https://textattack.readthedocs.io/en/master/
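A minimal sketch of running a TextAttack attack recipe against a Hugging Face sequence classifier. The `Attacker`, `TextFoolerJin2019`, and wrapper classes follow TextAttack's documented API; the specific checkpoint, dataset, and example count are illustrative assumptions.

```python
# Hedged sketch: attack a Hugging Face text classifier with the TextFooler recipe.
# The checkpoint name, dataset, and num_examples are illustrative choices.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

checkpoint = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = transformers.AutoTokenizer.from_pretrained(checkpoint)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

dataset = HuggingFaceDataset("imdb", split="test")
attack = TextFoolerJin2019.build(model_wrapper)      # word-substitution attack recipe
attacker = Attacker(attack, dataset, AttackArgs(num_examples=5))
attacker.attack_dataset()                            # prints per-example attack results
```

TextAttack also exposes the same recipes from the command line (for example, `textattack attack --recipe textfooler`), which is convenient for quick robustness checks without writing a script.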
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
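The entry above matches Foolbox's tagline; assuming it refers to Foolbox, a minimal sketch of its PyTorch front end with an L-infinity PGD attack. The pretrained ResNet-18, preprocessing constants, and epsilon are illustrative, and the `weights=` argument assumes a recent torchvision.

```python
# Hedged sketch: Foolbox PyTorch front end with an L-infinity PGD attack.
# Model, preprocessing values, and epsilon are placeholders; bounds must match the data range.
import torchvision.models as models
import foolbox as fb

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Small batch of sample ImageNet images shipped with Foolbox, in [0, 1] range.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print(is_adv)  # boolean tensor: which inputs were successfully perturbed
```

Because the model wrapper abstracts the framework, the same attack call works for TensorFlow and JAX models via `fb.TensorFlowModel` and `fb.JAXModel`.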