

Flatness-aware Sequential Learning Generates Resilient Backdoors

by Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan

First submitted to arXiv on: 20 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes Sequential Backdoor Learning (SBL), a novel framework for generating resilient backdoors that resist fine-tuning defenses. SBL reformulates backdoor training through the lens of continual learning (CL) and splits it into two tasks: the first learns a backdoored model, while the second applies CL principles to move that model into a region of the loss landscape that resists fine-tuning. The authors also seek flatter backdoor regions by incorporating a sharpness-aware minimizer into the framework. To demonstrate the effectiveness of their method, they conduct extensive empirical experiments on several benchmark datasets.
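The flatness-seeking idea can be sketched with a toy sharpness-aware minimization (SAM) update. This is a minimal one-dimensional illustration; the quadratic loss, the step sizes rho and lr, and the starting weight are illustrative assumptions, not the authors' actual training setup:

```python
# Hedged sketch: a one-dimensional SAM update on a toy quadratic loss.
# SAM first ascends to the locally worst-case nearby point, then descends
# using the gradient taken there, which biases optimization toward flat minima.

def loss_grad(w):
    """Gradient of the toy loss f(w) = 0.5 * w**2."""
    return w

def sam_step(w, rho=0.05, lr=0.1):
    """One SAM update with perturbation radius rho and learning rate lr."""
    g = loss_grad(w)
    # Worst-case perturbation of size rho along the normalized gradient.
    eps = rho * g / (abs(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed point.
    return w - lr * loss_grad(w + eps)

w = 2.0
for _ in range(200):
    w = sam_step(w)
# w settles near the flat minimum at 0.
```

In the paper's setting, an update of this flavor stands in for plain gradient descent during the second, continual-learning task, nudging the backdoored model toward flatter regions that fine-tuning is less likely to escape.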
Low Difficulty Summary (original content by GrooveSquid.com)
Machine learning models are vulnerable to backdoor attacks, which can be difficult to detect and remove. Researchers have proposed various defenses against these attacks, and some recent fine-tuning-based methods have shown notable efficacy at removing implanted backdoors from deep neural networks. This paper investigates the relationship between a backdoored model and its fine-tuned variants in the loss landscape, and proposes a new framework for generating resilient backdoors that can withstand these defensive algorithms.

Keywords

* Artificial intelligence  * Continual learning  * Fine-tuning  * Machine learning