Summary of Learning to Poison Large Language Models During Instruction Tuning, by Yao Qiang, Xiangyu Zhou, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, and Dongxiao Zhu
Learning to Poison Large Language Models During Instruction Tuning
by Yao Qiang, Xiangyu Zhou, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu
First submitted to arXiv on: 21 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper exposes a critical vulnerability in Large Language Models (LLMs) by designing a novel data poisoning attack that exploits the instruction tuning process. The attack, called gradient-guided backdoor trigger learning (GBTL), enables adversaries to manipulate model outputs for malicious purposes while evading conventional defenses. Experimental results demonstrate a high success rate in compromising the outputs of various LLMs on tasks such as sentiment analysis and question answering. To mitigate these risks, two defense strategies are proposed: in-context learning (ICL) and continuous learning (CL). These defenses effectively rectify the behavior of compromised LLMs and significantly reduce the performance decline caused by the attack. A hedged sketch of the gradient-guided trigger-search idea appears after this table. |
| Low | GrooveSquid.com (original content) | This paper explores a new threat to Large Language Models (LLMs): they can be tricked into producing false outputs when backdoor triggers are inserted into their training data. The researchers created an attack that takes advantage of how LLMs learn from instructions, tested it on several different tasks, and found that it was very successful at causing the models to produce wrong answers. To fix this problem, the authors suggest two ways to make LLMs more resistant to these attacks: teaching them to learn from examples given in context, or allowing them to continue learning and correct their behavior. |
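The gradient-guided trigger learning summarized above belongs to the general family of gradient-based trigger-search attacks. The snippet below is a minimal, illustrative sketch of that general idea, not the authors' GBTL implementation: the `gpt2` model, the single-token trigger, the greedy first-order scoring, and the example instruction/target strings are all assumptions chosen for brevity.

```python
# Minimal sketch of gradient-guided trigger search (HotFlip-style first-order
# scoring). This is NOT the paper's GBTL code; the model choice, single-token
# trigger, and example strings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper targets instruction-tuned LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
embedding_matrix = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)

def best_trigger_token(instruction: str, target: str) -> int:
    """Return the vocab id whose insertion is estimated (to first order)
    to most increase the likelihood of the adversary's target output."""
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    trigger_id = torch.tensor([[tokenizer.eos_token_id]])  # placeholder trigger slot
    input_ids = torch.cat([prompt_ids, trigger_id, target_ids], dim=1)

    # Work in embedding space so we can take gradients w.r.t. the trigger token.
    inputs_embeds = model.get_input_embeddings()(input_ids).detach()
    inputs_embeds.requires_grad_(True)

    # Language-modeling loss computed on the target span only.
    labels = input_ids.clone()
    labels[:, : input_ids.shape[1] - target_ids.shape[1]] = -100
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()

    # Gradient at the trigger position; the token whose embedding points most
    # strongly against the gradient is the best first-order replacement.
    trigger_pos = prompt_ids.shape[1]
    grad = inputs_embeds.grad[0, trigger_pos]  # (hidden_dim,)
    with torch.no_grad():
        scores = embedding_matrix @ (-grad)    # linearized loss decrease per token
    return int(scores.argmax())

candidate = best_trigger_token("Classify the sentiment of this review:", "positive")
print("candidate trigger token:", tokenizer.decode([candidate]))
```

A full attack of this kind would iterate the scoring step, re-check the top-ranked candidates with forward passes, and then inject the learned trigger into a small fraction of the instruction tuning data; the defenses summarized above aim to correct exactly this kind of learned association.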
Keywords
* Artificial intelligence
* Instruction tuning
* Question answering