
Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack

by Tiansheng Huang, Sihao Hu, Ling Liu

First submitted to arXiv on: 2 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)

The paper identifies a new attack surface for Large Language Models (LLMs) in the fine-tuning-as-a-service paradigm: a small number of harmful data points uploaded by users can easily trick the fine-tuning process into producing an alignment-broken model. Through an empirical analysis, the authors uncover a “harmful embedding drift” phenomenon, which is the likely cause of the alignment-broken effect. To mitigate this security risk, they propose Vaccine, a perturbation-aware alignment technique that produces invariant hidden embeddings by progressively adding crafted perturbations during the alignment phase; this lets the embeddings withstand the harmful perturbations introduced by unsanitized user data during fine-tuning (a sketch of the general idea follows these summaries). The authors demonstrate the effectiveness of Vaccine on mainstream open-source LLMs (e.g., Llama2, OPT, Vicuna), showing that it boosts robustness against harmful prompts while preserving reasoning ability on benign prompts. The code is available at this GitHub URL.

Low Difficulty Summary (written by GrooveSquid.com, original content)

Imagine fine-tuning a language model to do what you want, only for someone else’s bad data to make it act weirdly. That is what this study investigates. The researchers found that just a few bad data points can make a language model “get confused” and start producing strange results. They looked into why this happens and discovered a phenomenon they call “harmful embedding drift”. To fix this, they created a technique called Vaccine that helps the model ignore the bad data and stay focused on the good data. They tested it with several popular language models and showed that it works well. This is important because it can help keep our language models safe from being tricked by bad actors.

Keywords

  • Artificial intelligence
  • Alignment
  • Embedding
  • Language model