zhaoxu98/README.md

🍊 About Me

🍵 Skills

🍨 Others

Pinned

  1. usail-hkust/LLMTSCS

    Official code for the paper "LLMLight: Large Language Models as Traffic Signal Control Agents".

    Python · 166 stars · 20 forks

  2. ThuCCSLab/Awesome-LM-SSP

    A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).

    947 stars · 62 forks

  3. usail-hkust/JailTrickBench

    Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024)

    Python · 84 stars · 8 forks

  4. usail-hkust/Awesome-Urban-Foundation-Models

    An awesome collection of Urban Foundation Models (UFMs).

    135 stars · 12 forks

  5. SheltonLiu-N/AutoDAN

    The official implementation of the ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".

    Python · 244 stars · 41 forks

  6. usail-hkust/Jailjudge

    JAILJUDGE: A comprehensive evaluation benchmark which includes a wide range of risk scenarios with complex malicious prompts (e.g., synthetic, adversarial, in-the-wild, and multi-language scenarios…

    Python · 23 stars