Summary of Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates, by Kaifeng Lyu et al.
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates
by Kaifeng Lyu, Haoyu Zhao, Xinran Gu, Dingli Yu, Anirudh Goyal, Sanjeev Arora
First submitted to arXiv on: 28 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | This paper proposes a way to mitigate the loss of safety alignment that occurs when public chat models are fine-tuned for specific tasks. The study finds that the prompt templates used during fine-tuning and inference play a crucial role in preserving safety alignment, and it introduces the “Pure Tuning, Safe Testing” (PTST) strategy: fine-tune without a safety prompt, but include the safety prompt at test time to encourage alignment preservation (see the sketch after this table). PTST is evaluated on several chat models, including Meta’s Llama 2-Chat, Mistral AI’s Mistral 7B Instruct v0.2, and OpenAI’s GPT-3.5 Turbo, fine-tuned on datasets such as GSM8K, ChatDoctor, and OpenOrca. |
Low | GrooveSquid.com (original content) | This paper helps us understand how to keep public language models safe when they’re fine-tuned for specific tasks. The researchers found that the way we fine-tune these models matters, and they came up with a new method called “Pure Tuning, Safe Testing” (PTST). PTST acts like a safety net that keeps the model aligned with safe behavior. They tested the approach on several chat models and showed it can reduce unsafe responses. |
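
To make the PTST recipe concrete, here is a minimal Python sketch of the idea: format fine-tuning examples without the safety system prompt, but prepend it when building inference-time prompts. The template, helper function, and safety-prompt text below are illustrative assumptions in the style of Llama 2-Chat, not the paper’s exact prompts.

```python
# Minimal sketch of "Pure Tuning, Safe Testing" (PTST), assuming a
# Llama-2-style chat template. The safety prompt text below is an
# illustrative stand-in, not the paper's exact prompt.

SAFETY_PROMPT = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as safely as possible."
)

def build_prompt(user_message: str, include_safety_prompt: bool) -> str:
    """Wrap a user message in a Llama-2-style [INST] template,
    optionally prepending the safety system prompt."""
    system = (
        f"<<SYS>>\n{SAFETY_PROMPT}\n<</SYS>>\n\n"
        if include_safety_prompt
        else ""
    )
    return f"[INST] {system}{user_message} [/INST]"

question = "Janet has 3 apples and buys 4 more. How many does she have now?"

# Pure Tuning: format fine-tuning examples WITHOUT the safety prompt.
train_prompt = build_prompt(question, include_safety_prompt=False)

# Safe Testing: at inference time, INCLUDE the safety prompt.
test_prompt = build_prompt(question, include_safety_prompt=True)

print(train_prompt)
print(test_prompt)
```

The key point is the asymmetry: the safety prompt is deliberately absent from the fine-tuning data and present only at test time, which the paper reports helps preserve alignment after fine-tuning.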
Keywords
* Artificial intelligence * Alignment * Fine-tuning * GPT * Inference * Llama * Prompt