Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic

by Rishabh Bhardwaj, Do Duc Anh, Soujanya Poria

First submitted to arXiv on: 19 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed RESTA method, short for REstoring Safety through Task Arithmetic, tackles the safety degradation that fine-tuning can cause in aligned language models, restoring safety through simple arithmetic operations: it adds a safety vector to the compromised model’s weights. The method is effective under both parameter-efficient and full fine-tuning, across tasks such as instruction following in multiple languages and problem-solving in code and math. RESTA also generalizes across existing safety evaluation benchmarks and a novel multilingual benchmark dataset. Applying RESTA significantly reduces the harmfulness of compromised models while preserving their performance on the fine-tuned task. The authors release their source code at this URL.
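The arithmetic behind the idea is simple enough to sketch. Below is a minimal, hypothetical illustration of the mechanism described above, assuming the safety vector is the element-wise difference between an aligned model’s weights and an unaligned counterpart’s. The names `compute_safety_vector`, `apply_safety_vector`, and the coefficient `scale` are illustrative assumptions, not the authors’ released API.

```python
import torch

def compute_safety_vector(aligned_state: dict, unaligned_state: dict) -> dict:
    """Assumed construction: the safety vector is the element-wise
    difference between an aligned model's weights and an unaligned one's."""
    return {name: aligned_state[name] - unaligned_state[name]
            for name in aligned_state}

def apply_safety_vector(finetuned_state: dict, safety_vector: dict,
                        scale: float = 1.0) -> dict:
    """Add the scaled safety vector to the fine-tuned (compromised)
    model's weights: theta_safe = theta_finetuned + scale * v_safety."""
    return {name: param + scale * safety_vector[name]
            for name, param in finetuned_state.items()}

# Usage sketch with toy tensors standing in for full model state dicts.
aligned   = {"w": torch.tensor([0.5, -0.2])}
unaligned = {"w": torch.tensor([0.1,  0.3])}
finetuned = {"w": torch.tensor([0.4,  0.0])}

v = compute_safety_vector(aligned, unaligned)
restored = apply_safety_vector(finetuned, v, scale=1.0)
print(restored["w"])  # tensor([0.8000, -0.5000])
```

In practice the same element-wise update would be applied to every parameter tensor in the model’s state dict; the scaling coefficient trades off safety restoration against drift from the fine-tuned weights.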
Low Difficulty Summary (written by GrooveSquid.com; original content)
RESTA is a new way to make language models safer by adding a special “safety vector” to the model’s weights. This makes the model less likely to produce harmful or offensive text. The method works well in different situations, such as following instructions in languages like Chinese, English, and Hindi, or solving code and math problems. RESTA even works on existing tests that check for safety and on a new multilingual test with many questions about harm. By using RESTA, the authors were able to make compromised models much safer while still keeping them good at their original tasks.

Keywords

» Artificial intelligence  » Fine tuning  » Parameter efficient