
Summary of “Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space” by Zongru Wu et al.


Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space

by Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu

First submitted to arXiv on: 19 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates the reliability of language models (LMs) in natural language processing tasks, focusing on mitigating backdoor attacks. Prior defenses have struggled against complex backdoor attacks, so this study uses Fourier analysis to examine the learning dynamics of backdoored LMs in frequency space. The analysis reveals that backdoor mappings concentrate in the lower frequencies, which causes them to converge faster than the clean mapping. To counter this, the authors propose Multi-Scale Low-Rank Adaptation (MuScleLoRA), which combines radial scalings with low-rank adaptation and aligns gradients during parameter updates, steering the model toward learning the relatively high-frequency clean mapping. Experimental results show that MuScleLoRA outperforms baseline defenses, reducing the average success rate of diverse backdoor attacks to below 15% across multiple datasets and backbone LMs, including BERT, RoBERTa, GPT2-XL, and Llama2.
Low Difficulty Summary (written by GrooveSquid.com; original content)
This research paper looks at how to make language models more reliable. Right now, these models can be tricked into making mistakes by attackers who slip special “backdoor” information into the training data. The researchers used a mathematical technique called Fourier analysis to understand why this works. They found that when backdoors are added, the model tends to latch onto lower-frequency patterns in the data, which makes the backdoor easy to learn. To fix this problem, they created a new method called MuScleLoRA, which pushes the model to learn from the higher-frequency clean information instead. The results show that MuScleLoRA is more effective than other methods at preventing backdoors from working.
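To make the idea in the summaries above more concrete, here is a minimal numpy sketch of the two ingredients the method combines: a low-rank update added to a frozen pretrained weight, and several radial scaling factors applied to that update. All names, dimensions, and scale values here are hypothetical illustrations, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 16, 16, 4
W_frozen = rng.normal(size=(d_out, d_in))  # pretrained weight, kept fixed

# Low-rank adaptation: only A and B are trainable, with rank << d,
# so the model has far less capacity to memorize backdoor triggers.
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))  # common LoRA-style init: B = 0

# Several radial scalings applied to the low-rank update (values are
# hypothetical); combining multiple scales is meant to downweight the
# low-frequency components that backdoor mappings favor.
scales = [0.25, 0.5, 1.0]

def forward(x):
    """Apply the frozen weight plus the averaged multi-scale low-rank update."""
    delta = B @ A
    update = sum(s * delta for s in scales) / len(scales)
    return (W_frozen + update) @ x

x = rng.normal(size=d_in)
y = forward(x)
# With B initialized to zero, the adapted layer matches the frozen one.
assert np.allclose(y, W_frozen @ x)
```

During fine-tuning only `A` and `B` would receive gradients; the gradient-alignment step the paper adds on top of this is omitted here for brevity.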

Keywords

» Artificial intelligence  » BERT  » Low-rank adaptation  » Natural language processing