Summary of Guard: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence Of Large Language Models, by Haibo Jin et al.
GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models
by Haibo Jin, Ruoxi Chen, Andy Zhou, Yang Zhang, Haohan Wang
First submitted to arxiv on: 5 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The proposed system, called GUARD (Guideline Upholding through Adaptive Role-play Diagnostics), aims to generate massive jailbreaks for Large Language Models (LLMs) in a novel and intuitive way. The approach involves assigning four roles to user LLMs to collaborate on new jailbreaks, leveraging existing ones by clustering frequency and semantic patterns sentence-by-sentence. This knowledge graph is then used to generate new jailbreaks that have been shown to effectively induce LLMs to produce unethical or guideline-violating responses. Additionally, GUARD pioneers a setting that automatically follows government-issued guidelines to test whether LLMs adhere to them. The system was empirically validated on three open-sourced and one commercial LLM, as well as two vision language models. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary The researchers developed a new way to make sure big computer programs don’t say bad things. They created a special tool that helps other tools learn how to be good by generating “jailbreaks” (bad things) in a controlled way. This system uses people’s help to generate more jailbreaks and makes sure they’re effective at making the computer program do something it shouldn’t. The team also tested their system on several big language programs and found that it works. |
Keywords
* Artificial intelligence * Clustering * Knowledge graph