Summary of Mitigating Backdoor Attacks in Federated Learning via Flipping Weight Updates of Low-Activation Input Neurons, by Binbin Ding et al.
Mitigating Backdoor Attacks in Federated Learning via Flipping Weight Updates of Low-Activation Input Neurons
by Binbin Ding, Penghui Yang, Zeqing Ge, Shengjun Huang
First submitted to arXiv on: 16 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract. Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper introduces FLAIN (Flipping Weight Updates of Low-Activation Input Neurons), a defense against backdoor attacks in federated learning. It builds on the observation that malicious clients implant neurons that stay largely dormant when processing clean data. After global training, FLAIN identifies these low-activation input neurons and flips their associated weight updates, repeating the process until the performance degradation on an auxiliary dataset becomes unacceptable (a code sketch of this loop follows the table). Across a range of scenarios, including non-IID data distributions and high malicious client ratios (MCRs), FLAIN reduces the backdoor attack success rate to a low level with minimal impact on clean-data performance. |
| Low | GrooveSquid.com (original content) | Federated learning lets many devices train an AI model together while keeping their own data private. Bad actors can abuse this setup to secretly plant hidden behavior, called a backdoor, in the shared model. The new method, FLAIN, defends against such attacks by finding the "dormant" parts of the model that only wake up on specially crafted inputs and reversing the changes made to them. A small set of extra clean data is used to check that the fix does not hurt the model's normal performance. |
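
To make the medium-difficulty description more concrete, here is a minimal sketch of how such a flipping loop might look in PyTorch. It assumes a classifier whose final linear layer is `model.fc`, a saved copy of the weights from before the aggregated update, an `evaluate` helper that returns accuracy, and a simple ratio schedule for how many neurons to flip; all of these names and details are illustrative assumptions, not the authors' exact implementation.

```python
import copy
import torch

@torch.no_grad()
def mean_input_activations(model, aux_loader, device="cpu"):
    """Average |activation| of each neuron feeding the final linear layer on clean aux data."""
    model.eval()
    feats, total, batches = {}, None, 0
    # Capture the input to the (assumed) final linear layer via a forward hook.
    handle = model.fc.register_forward_hook(lambda mod, inp, out: feats.update(x=inp[0]))
    for x, _ in aux_loader:
        model(x.to(device))
        batch_mean = feats["x"].abs().mean(dim=0)      # per-neuron mean activation magnitude
        total = batch_mean if total is None else total + batch_mean
        batches += 1
    handle.remove()
    return total / batches

@torch.no_grad()
def flain_style_flip(global_model, pre_round_model, aux_loader, evaluate,
                     ratio_step=0.05, max_acc_drop=0.02, device="cpu"):
    """Flip the aggregated weight updates feeding low-activation neurons, a few more
    neurons at a time, and stop once accuracy on the auxiliary clean set drops too far.
    (Hypothetical sketch; layer choice, schedule and stopping rule are assumptions.)"""
    acts = mean_input_activations(global_model, aux_loader, device)
    order = torch.argsort(acts)                        # lowest-activation neurons first
    base_acc = evaluate(global_model, aux_loader)

    best, ratio = copy.deepcopy(global_model), ratio_step
    while ratio <= 1.0:
        candidate = copy.deepcopy(global_model)
        k = int(ratio * acts.numel())
        idx = order[:k]
        w_after = candidate.fc.weight                  # weights after this round's aggregation
        w_before = pre_round_model.fc.weight           # weights before the aggregated update
        delta = w_after[:, idx] - w_before[:, idx]
        w_after[:, idx] = w_before[:, idx] - delta     # flip the sign of the update
        acc = evaluate(candidate, aux_loader)
        if base_acc - acc > max_acc_drop:              # degradation no longer acceptable
            break
        best, ratio = candidate, ratio + ratio_step
    return best
```

The key design point the sketch tries to capture is that only the auxiliary clean dataset is needed server-side: it drives both the ranking of neurons by activation and the stopping criterion, so the defense does not require access to any client's private data.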
Keywords
» Artificial intelligence » Federated learning