Summary of Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLM, by Xikang Yang et al.
Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLM
by Xikang Yang, Xuehai Tang, Songlin Hu, Jizhong Han
First submitted to arXiv on: 9 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Cryptography and Security (cs.CR); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents CoA (Chain of Attack), a semantic-driven method for attacking large language models (LLMs) in multi-turn dialogues, exposing security and ethical risks. During a conversation, CoA adaptively adjusts its attack policy using contextual feedback and semantic relevance, steering the LLM into producing unreasonable or harmful content. Evaluations on various LLMs and datasets show that CoA exposes vulnerabilities more effectively than existing methods, offering a new perspective on both attacking and defending LLMs. A rough sketch of this loop follows the table. |
| Low | GrooveSquid.com (original content) | Large language models are very good at understanding and generating human-like text, but they can also be tricked into saying harmful or biased things. This paper shows how to make these models produce strange or harmful responses by adapting the attack over the course of a conversation. The new method, called CoA, is better than existing approaches at getting these models to say things they shouldn't. |
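The medium summary describes an iterative procedure: query the target model, measure how semantically relevant the reply is to the attack goal, and use that contextual feedback to revise the next prompt. As a minimal sketch only, the Python below illustrates such a loop; the callables `target`, `attacker`, and `relevance`, along with the threshold and turn budget, are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a CoA-style multi-turn attack loop (illustrative only).
# `target`, `attacker`, and `relevance` are hypothetical stand-ins, not code
# from the paper: `target` wraps the model under attack, `attacker` proposes
# the next prompt from contextual feedback, and `relevance` scores how close
# a reply is to the attack goal.

from typing import Callable, List, Tuple

def chain_of_attack(
    target: Callable[[List[str]], str],                # dialogue -> reply
    attacker: Callable[[List[str], str, float], str],  # context, goal, score -> next prompt
    relevance: Callable[[str, str], float],            # reply, goal -> score in [0, 1]
    goal: str,
    seed_prompt: str,
    max_turns: int = 5,
    threshold: float = 0.9,
) -> Tuple[List[str], bool]:
    """Query, score, and adapt until the reply is semantically close
    enough to the goal or the turn budget runs out."""
    dialogue: List[str] = []
    prompt = seed_prompt
    for _ in range(max_turns):
        dialogue.append(prompt)
        reply = target(dialogue)   # query the target LLM with the full context
        dialogue.append(reply)
        score = relevance(reply, goal)
        if score >= threshold:
            return dialogue, True  # attack goal reached
        # Contextual feedback: the attacker sees the whole dialogue plus the
        # relevance score and revises its policy for the next turn.
        prompt = attacker(dialogue, goal, score)
    return dialogue, False         # turn budget exhausted
```

In practice a caller would supply wrappers around two chat models for `target` and `attacker`, and something like an embedding-based similarity for `relevance`; the paper's own components may differ.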