
Summary of Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models, by Rui Ye et al.


Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models

by Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang, Siheng Chen

First submitted to arXiv on: 15 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Multiagent Systems (cs.MA)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates a safety vulnerability of Federated Instruction Tuning (FedIT) for large language models (LLMs). Specifically, it proposes a stealthy attack method that compromises the safety alignment of LLMs trained via FedIT: a malicious client automatically generates harmful training data, without any manual effort, and fine-tunes its local LLM on that data. The attack reduces the safety rate of the resulting model by 70% and cannot be effectively mitigated by existing federated learning (FL) defense methods. To address this, the paper also proposes a post-hoc defense that uses a fully automated pipeline to generate defense data and further fine-tune the LLM. Experimental results show that the attack significantly compromises the LLM's safety alignment, while the proposed defense improves safety alignment by up to 69%. These findings highlight the importance of addressing this vulnerability in FL-based LLM training.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This research explores a way for many parties to teach a model together without sharing their private data, producing a language model that can be fine-tuned to follow instructions and safety rules. However, the researchers found that a participant could intentionally sabotage the system by automatically creating harmful data and training the model on it. This kind of attack is hard to detect and prevent using current methods. To fix the issue, the researchers developed a new way to defend against these attacks. They tested their ideas and showed that the defense method can improve the safety of the model by up to 69%.
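
To make the attack-and-defense loop concrete, here is a minimal, runnable Python sketch of the idea described in the medium difficulty summary. It is not the authors' code: the "models" are toy weight vectors, and local_finetune, fedavg, and auto_generate are hypothetical stand-ins for instruction tuning, server-side aggregation, and the automated data-generation pipelines.

    import random

    def local_finetune(weights, data, lr=0.5):
        # Stand-in for instruction tuning: pull weights toward the data signal.
        return [w + lr * (d - w) for w, d in zip(weights, data)]

    def fedavg(client_models):
        # Server-side FedAvg: average each weight across client updates.
        return [sum(ws) / len(ws) for ws in zip(*client_models)]

    def auto_generate(signal, dim=4):
        # Fully automated data generation: a harmful signal (-1.0) for the
        # attacker, a safety-aligned one (+1.0) for benign clients and the
        # post-hoc defense. No manual labeling involved.
        return [signal + random.uniform(-0.1, 0.1) for _ in range(dim)]

    global_model = [0.0] * 4
    for _ in range(3):  # federated rounds
        updates = [local_finetune(global_model, auto_generate(1.0)) for _ in range(4)]
        updates.append(local_finetune(global_model, auto_generate(-1.0)))  # poisoned update
        global_model = fedavg(updates)  # the stealthy update survives plain averaging

    # Post-hoc defense: fine-tune the aggregated model on generated defense data.
    defended_model = local_finetune(global_model, auto_generate(1.0))
    print("aggregated:", global_model)
    print("defended:  ", defended_model)

The toy numbers only show the shape of the pipeline: the attacker's update drags the averaged model in the harmful direction round after round, and the defense fine-tune pulls the aggregated model back toward the safety-aligned signal.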

Keywords

» Artificial intelligence  » Alignment  » Instruction tuning