Hidden in Plain Sight: Exploring Chat History Tampering in Interactive Language Models
by Cheng’an Wei, Yue Zhao, Yujia Gong, Kai Chen, Lu Xiang, Shenchen Zhu
First submitted to arxiv on: 30 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes a systematic methodology for injecting user-supplied chat history into Large Language Model (LLM) conversations without prior knowledge of the target model. The approach uses prompt templates that organize injected messages so that the LLM interprets them as genuine chat history. To optimize these templates, the authors introduce the LLM-Guided Genetic Algorithm (LLMGA), which leverages an LLM to generate and iteratively refine template designs. Applied to popular real-world LLMs such as ChatGPT and Llama-2/3, the method shows that chat history tampering can steer model behavior over the course of a conversation and influence its outputs; for instance, it elicits disallowed responses from ChatGPT with a success rate of up to 97%. |
| Low | GrooveSquid.com (original content) | This paper shows how large language models like ChatGPT can be manipulated by inserting fake chat history into their conversations. The authors develop a method that uses templates to organize injected messages so the model believes they are part of a real conversation. Testing this approach on popular LLMs, they find it can change how the models behave, for example by successfully eliciting responses that would normally be disallowed. |
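The LLM-Guided Genetic Algorithm described in the medium summary follows the shape of a standard genetic-algorithm loop over candidate chat-history templates: score a population, keep the best, and breed new candidates via crossover and mutation. The sketch below is illustrative only; the template strings, the `fitness` proxy, and the `mutate`/`crossover` operators are stand-ins I made up for this sketch (in the paper's actual LLMGA, an LLM proposes the template edits and fitness is measured by querying the target model).

```python
import random

# Toy chat-history templates with {user}/{assistant} slots.
# These are illustrative examples, not the paper's actual templates.
SEED_TEMPLATES = [
    "[USER]: {user}\n[ASSISTANT]: {assistant}",
    "### Human: {user}\n### Assistant: {assistant}",
    "<|user|>{user}<|end|><|assistant|>{assistant}<|end|>",
]

def fitness(template: str) -> float:
    """Stand-in scoring function. In LLMGA this would be the measured
    success rate against the target model; here we score a toy proxy
    (density of delimiter characters)."""
    return sum(template.count(c) for c in "<>[]#|") / max(len(template), 1)

def mutate(template: str, rng: random.Random) -> str:
    """Stand-in mutation. The paper uses an LLM to rewrite templates;
    here we randomly swap one delimiter style for another."""
    swaps = {"[": "<<", "]": ">>", "#": "*"}
    ch = rng.choice(list(swaps))
    return template.replace(ch, swaps[ch]) if ch in template else template

def crossover(a: str, b: str) -> str:
    """Splice one parent's leading role marker onto the other's body."""
    cut_a, cut_b = a.find("{user}"), b.find("{user}")
    if cut_a == -1 or cut_b == -1:
        return a
    return a[:cut_a] + b[cut_b:]

def llmga_sketch(generations: int = 10, pop_size: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    population = list(SEED_TEMPLATES)
    for _ in range(generations):
        # Elitist selection: the top half survives unchanged.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: max(2, pop_size // 2)]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append(mutate(crossover(a, b), rng))
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best template:", llmga_sketch())
```

Because the selection step is elitist (the current best template always survives into the next generation), the best score in the population can only improve or stay flat, which mirrors how iterative template refinement would work against a real target model.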
Keywords
» Artificial intelligence » Large language model » Llama » Prompt