


Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation

by Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu

First submitted to arXiv on: 3 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses a crucial security concern in fine-tuning-as-a-service for large language models: the “harmful fine-tuning attack,” in which fine-tuning on harmful data breaks a model’s safety alignment. Existing defenses remain only partially effective against this attack, and its root cause has not been clearly identified. The authors suggest that harmful perturbation over the model weights is a primary cause of alignment-broken models, and propose Booster, which adds a loss regularizer to the alignment stage’s optimization so that the reduction in harmful loss after a simulated harmful perturbation is attenuated (a rough illustrative sketch of such a step follows these summaries). Empirical results show that Booster effectively reduces the fine-tuned models’ harmful scores while preserving their performance on downstream tasks, contributing to safer and more reliable large language models.

Low Difficulty Summary (original content by GrooveSquid.com)
This research paper looks at a problem in how we make big language models better. Right now, there is a risk that someone could fine-tune these models on bad examples and make them do harmful things. The authors want to find a way to stop this from happening. They think the key is to look at how the model’s weights change when it is fine-tuned on harmful data. To fix the issue, they came up with an idea called Booster, which keeps the model safe by making sure a harmful update cannot push it too far off track. The results show that this method works well: the models stay good at their normal tasks and are much harder to turn harmful.

Keywords

» Artificial intelligence  » Alignment  » Fine tuning  » Optimization