An easy-to-use Python framework to generate adversarial jailbreak prompts.
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming"
Restore safety in fine-tuned language models through task arithmetic
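The task-arithmetic idea behind the last entry can be illustrated with a short sketch. This is not the repository's actual API; the function names, the notion of a "safety vector" built from an aligned and an unaligned checkpoint, and the scaling factor alpha are assumptions chosen for illustration. The general technique is: take the element-wise difference between two sets of model weights and add that scaled difference to a fine-tuned model's weights.

# Hypothetical sketch of task arithmetic for safety restoration (assumed
# names, not the linked repo's API): a "safety vector" is the per-tensor
# difference between an aligned model and an unaligned counterpart, and it
# is added back, scaled by alpha, to a fine-tuned model's weights.
import torch

def build_safety_vector(aligned_state, unaligned_state):
    """Safety vector = aligned weights - unaligned weights (per tensor)."""
    return {name: aligned_state[name] - unaligned_state[name]
            for name in aligned_state}

def restore_safety(finetuned_state, safety_vector, alpha=1.0):
    """Add the scaled safety vector to the fine-tuned weights."""
    return {name: finetuned_state[name] + alpha * safety_vector[name]
            for name in finetuned_state}

if __name__ == "__main__":
    # Toy example: random tensors stand in for model parameters.
    shapes = {"layer.weight": (4, 4), "layer.bias": (4,)}
    aligned = {k: torch.randn(s) for k, s in shapes.items()}
    unaligned = {k: torch.randn(s) for k, s in shapes.items()}
    finetuned = {k: torch.randn(s) for k, s in shapes.items()}

    sv = build_safety_vector(aligned, unaligned)
    restored = restore_safety(finetuned, sv, alpha=0.5)
    print(restored["layer.weight"].shape)  # torch.Size([4, 4])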