Home

Beat Buesser edited this page May 21, 2021 · 21 revisions
Welcome to the Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) is a Python library for machine learning security. ART provides tools that enable developers and researchers to evaluate, defend, certify, and verify machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference. ART supports all popular machine learning frameworks, all data types (images, tables, audio, video, etc.), and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.). Supported frameworks:
- TensorFlow (v1 and v2) (https://www.tensorflow.org)
- Keras (https://keras.io)
- PyTorch (https://pytorch.org)
- MXNet (https://mxnet.apache.org)
- scikit-learn (https://scikit-learn.org)
- XGBoost (https://xgboost.ai)
- LightGBM (https://lightgbm.readthedocs.io)
- CatBoost (https://catboost.ai)
- GPy (https://sheffieldml.github.io/GPy/)
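To make the evasion threat concrete, here is a minimal NumPy sketch of a fast-gradient-sign-style attack against a toy linear logistic classifier. It illustrates the kind of vulnerability ART's evasion attacks probe, but it deliberately does not use ART's API; the `fgsm` function and the toy weights are illustrative assumptions, not part of the library.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping a score to a class-1 probability
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Illustrative FGSM-style step: perturb x by eps in the direction
    that increases the binary cross-entropy loss of a linear logistic model."""
    p = sigmoid(x @ w + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # analytic gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy classifier: predicts class 1 when w.x + b > 0
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])     # score 2*0.3 - 0.1 = 0.5 > 0, so class 1
y = 1.0                      # true label

x_adv = fgsm(x, y, w, b, eps=0.4)
print(sigmoid(x @ w + b) > 0.5)      # original input: classified as 1
print(sigmoid(x_adv @ w + b) > 0.5)  # perturbed input: prediction flips to 0
```

ART packages attacks like this (and far stronger ones) behind a uniform estimator interface, so the same attack code can target models from any of the frameworks listed above.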