#Computer Science#Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
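Since ART is a Python library, a short hedged sketch may help show its evasion-attack workflow: wrap a trained model, then generate adversarial examples. The dataset, model choice, and eps value below are illustrative assumptions, not part of the project description.

```python
# A minimal sketch, assuming adversarial-robustness-toolbox (ART) and
# scikit-learn are installed: wrap a scikit-learn model with ART and craft
# Fast Gradient Method evasion examples against it.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn classifier.
x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap it so ART attacks can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Generate adversarial (evasion) examples with FGSM.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x)

print("Accuracy on clean inputs:      ", model.score(x, y))
print("Accuracy on adversarial inputs:", model.score(x_adv, y))
```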
#Computer Science#A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
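The entry above describes AIF360-style fairness tooling. A minimal sketch, assuming the aif360 package is installed, of computing a dataset-level fairness metric on a toy dataset and mitigating bias with the Reweighing preprocessor; the toy DataFrame and group definitions are assumptions for illustration only.

```python
# A minimal sketch, assuming aif360 and pandas are installed: measure
# disparate impact, then mitigate it with the Reweighing preprocessor.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute, 'label' the binary outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "score": [0.2, 0.8, 0.5, 0.9, 0.4, 0.7, 0.1, 0.6],
    "label": [0, 1, 0, 1, 1, 1, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights that balance group/label combinations.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print("Disparate impact after: ", metric_transf.disparate_impact())
```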
#Computer Science#Interpretability and explainability of data and machine learning models
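As a generic illustration of model explainability (not this toolkit's own API, which bundles many explainer algorithms), a minimal sketch using scikit-learn's permutation importance; the dataset and model are assumptions chosen for illustration.

```python
# A generic explainability sketch (permutation importance with scikit-learn);
# this intentionally does not use the toolkit's own explainer classes.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(x_train, y_train)

# Shuffle each feature and measure the drop in test accuracy: a simple,
# model-agnostic explanation of which inputs the model relies on.
result = permutation_importance(model, x_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```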
#Computer Science#Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
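As a generic illustration of predictive uncertainty (not UQ360's own API), a minimal sketch that produces a 90% prediction interval with quantile gradient boosting; the synthetic data and quantile choices are assumptions for illustration.

```python
# A generic predictive-uncertainty sketch (quantile regression intervals);
# this intentionally does not use UQ360's own estimator classes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(500, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=500)

# Fit one model per quantile to get a central estimate and a 90% interval.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(x, y)
    for q in (0.05, 0.5, 0.95)
}

x_new = np.array([[2.5], [7.5]])
lower = models[0.05].predict(x_new)
median = models[0.5].predict(x_new)
upper = models[0.95].predict(x_new)
for xi, lo, med, hi in zip(x_new.ravel(), lower, median, upper):
    print(f"x={xi:.1f}: y~{med:.2f}  (90% interval [{lo:.2f}, {hi:.2f}])")
```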
#Blockchain#Paddle with Decentralized Trust based on Xuperchain
#Computer Science#Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks
#Computer Science#Hands-on workshop material for evaluating the performance, fairness, and robustness of models
#Computer Science#Security protocols for estimating adversarial robustness of machine learning models for both tabular and image datasets. This package implements a set of evasion attacks based on metaheuristic optimization.
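As a generic sketch of what a gradient-free (metaheuristic) evasion attack on tabular data can look like (not this package's API), a simple random-search perturbation against a scikit-learn classifier; all names and parameters are chosen for illustration.

```python
# A generic sketch of a gradient-free (random-search) evasion attack on a
# tabular classifier; this is not the package's own API.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
x, y = load_wine(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(x, y)

def random_search_evasion(sample, label, eps=0.5, iters=200):
    """Perturb `sample` within an L-inf ball of radius eps * feature scale,
    returning the first perturbation that changes the predicted class."""
    scale = x.std(axis=0)
    for _ in range(iters):
        candidate = sample + rng.uniform(-eps, eps, size=sample.shape) * scale
        if model.predict(candidate.reshape(1, -1))[0] != label:
            return candidate
    return None  # no evasive point found within the budget

adv = random_search_evasion(x[0], y[0])
print("Evasion found:", adv is not None)
```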
A self-hosted, privacy-focused RAG (Retrieval-Augmented Generation) interface for intelligent document interaction. Turn any document into a knowledge base you can chat with.
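Since the entry describes a RAG workflow, a minimal, generic sketch of the retrieve-then-generate loop may help; it uses TF-IDF retrieval from scikit-learn and a placeholder `generate` function standing in for an LLM call. None of this is the project's own code, and the document chunks are invented for illustration.

```python
# A generic retrieve-then-generate (RAG) sketch: TF-IDF retrieval plus a
# placeholder generation step; not the project's own implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny "knowledge base" of document chunks (illustrative).
chunks = [
    "ART implements evasion, poisoning, extraction and inference attacks.",
    "AIF360 provides fairness metrics and bias-mitigation algorithms.",
    "UQ360 estimates and communicates uncertainty in model predictions.",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def retrieve(question, k=2):
    """Return the k chunks most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

def generate(question, context):
    """Placeholder for an LLM call; here it just echoes the prompt it would send."""
    return f"Answer '{question}' using:\n- " + "\n- ".join(context)

question = "Which toolbox covers poisoning attacks?"
print(generate(question, retrieve(question)))
```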