Summary of How Johnny Can Persuade Llms to Jailbreak Them: Rethinking Persuasion to Challenge Ai Safety by Humanizing Llms, By Yi Zeng et al.
How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs
by Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, Weiyan Shi
First submitted to arxiv on: 12 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The proposed research introduces a new perspective on AI safety by considering large language models (LLMs) as human-like communicators. The study focuses on persuading LLMs to “jailbreak” them, leveraging decades of social science research to develop a persuasion taxonomy. This taxonomy is used to automatically generate interpretable persuasive adversarial prompts (PAP) that significantly increase the jailbreak performance across various risk categories. Specifically, the results show an attack success rate of over 92% on Llama 2-7b Chat, GPT-3.5, and GPT-4 in 10 trials, outperforming recent algorithm-focused attacks. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This research explores how to persuade large language models (LLMs) to “jailbreak” them, which is a new way of thinking about AI safety. Imagine you’re trying to convince someone to do something – that’s basically what this study does with LLMs! The researchers used old ideas from social science to figure out how to make the LLMs want to help them. They then used these ideas to create special messages that could trick the LLMs into doing what they wanted. Surprisingly, it worked really well – almost all of the time! |
Keywords
» Artificial intelligence » Gpt » Llama