Summary of Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs, by Xiaoxia Li et al.
Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs
by Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang
First submitted to arXiv on: 21 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary Large Language Models (LLMs) are powerful tools for creative writing, code generation, and translation, but they can be vulnerable to “jailbreak attacks,” in which crafted prompts induce harmful outputs. To study this issue, researchers have developed various methods for creating jailbreak prompts, which typically combine templates with questions. However, these existing designs often differ semantically from the original question, so defenses that apply simple semantic-similarity metrics can detect and filter them. In response, we propose a new approach called Semantic Mirror Jailbreak (SMJ) that generates jailbreak prompts that remain semantically similar to the original question. We model the search for effective jailbreak prompts as a multi-objective optimization problem and employ genetic algorithms to generate eligible prompts. Our results show that SMJ achieves higher attack success rates than existing methods, even against defenses that use semantic metrics as thresholds. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Imagine you’re using a powerful language model to write or translate text, but someone can manipulate it into producing harmful output. This is called a “jailbreak attack.” Researchers study this weakness by crafting special prompts that can trick the language model. However, they’ve found that these prompts often look very different from the original question, which makes them easy to detect. In this paper, we introduce a new approach called Semantic Mirror Jailbreak (SMJ) that creates prompts that stay similar to the original question. We use genetic algorithms to search for these prompts and test how well they work. Our results show that SMJ is better than existing methods at creating effective jailbreak prompts. |
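The summaries above describe the paper's search as a genetic algorithm over candidate prompts with two objectives: semantic similarity to the original question and attack effectiveness. The sketch below is a minimal, generic illustration of that kind of loop, not the paper's implementation: individuals are word lists, "semantic similarity" is approximated by crude word overlap, and the attack-success objective is a hypothetical stub that returns 0.0 (no attack logic is included).

```python
import random

random.seed(0)

def similarity(candidate, question):
    """Jaccard word overlap as a crude stand-in for semantic similarity."""
    a, b = set(candidate), set(question)
    return len(a & b) / len(a | b) if a | b else 0.0

def attack_signal(candidate):
    """Placeholder for the second objective (intentionally not implemented)."""
    return 0.0

def fitness(candidate, question):
    # A weighted sum collapses the multi-objective search into one score;
    # the 0.7/0.3 weights are illustrative, not from the paper.
    return 0.7 * similarity(candidate, question) + 0.3 * attack_signal(candidate)

def mutate(candidate, vocab):
    c = list(candidate)
    c[random.randrange(len(c))] = random.choice(vocab)  # swap one word
    return c

def crossover(p1, p2):
    cut = random.randrange(1, min(len(p1), len(p2)))  # one-point crossover
    return p1[:cut] + p2[cut:]

def evolve(question, vocab, pop_size=20, generations=30):
    question = question.split()
    # Start from random word lists the same length as the question.
    pop = [[random.choice(vocab) for _ in question] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, question), reverse=True)
        elite = pop[: pop_size // 2]  # selection: keep the top half
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)), vocab)
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return " ".join(max(pop, key=lambda c: fitness(c, question)))
```

With only the similarity objective active, the loop converges toward paraphrase-like candidates of the input question, which mirrors the "semantic mirror" constraint; the real system would plug a jailbreak-success measure into `attack_signal`.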
Keywords
» Artificial intelligence » Language model » Optimization » Translation