Summary of Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning, by Hassan Ali et al.
Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning
by Hassan Ali, Surya Nepal, Salil S. Kanhere, Sanjay Jha
First submitted to arXiv on: 15 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper presents a defense against backdoor attacks in Federated Learning (FL), a setting that prior work has shown to be vulnerable to such attacks. The proposed Adversarially Guided Stateful Defense (AGSD) addresses the limitations of existing defenses, which rely on unrealistic assumptions about client submissions and sampled clients. AGSD guides cluster selection with a novel metric, the trust index, computed from adversarial perturbations on a small held-out dataset. It also maintains a trust state history for each client, adaptively penalizing backdoored clients and rewarding clean ones. The results show that AGSD outperforms state-of-the-art (SOTA) defenses in realistic FL settings, with minimal loss of clean accuracy (at worst 5% below the best accuracy). A toy sketch of this trust-scoring loop appears after this table. |
| Low | GrooveSquid.com (original content) | Backdoor attacks on Federated Learning make it hard for devices to train a model together safely, and researchers have shown that these attacks can be highly effective against current defenses. The proposed AGSD is a new way to protect deep neural networks (DNNs) from backdoor attacks in FL. It uses a special metric called the trust index and keeps track of each client’s history to make smart decisions about which client updates to trust. The results are impressive: AGSD works well even with limited data or no held-out dataset at all. |
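To make the mechanism concrete, here is a minimal Python sketch of an AGSD-style server round on a toy linear model. Everything in it, the function names, the FGSM-style perturbation, the mean-threshold "clustering," and the fixed-step trust-state update, is an illustrative assumption reconstructed from the summary above, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of an AGSD-style round (assumptions, not the paper's code).
rng = np.random.default_rng(0)

def predict(w, X):
    """Toy linear binary classifier: label 1 iff X @ w > 0."""
    return (X @ w > 0).astype(int)

def adversarial_accuracy(w, X, y, eps=0.3):
    """Accuracy under an FGSM-style L_inf perturbation of the held-out inputs.
    For a linear scorer X @ w, shifting each x against its true label by
    eps * sign(w) is the worst-case bounded perturbation."""
    step = eps * np.sign(w)                  # direction that shrinks the margin
    X_adv = X - (2 * y[:, None] - 1) * step  # push each point toward the boundary
    return (predict(w, X_adv) == y).mean()

def trust_index(w, X, y):
    """Stand-in for AGSD's perturbation-based trust index: adversarial
    accuracy of a client's submitted model on the small held-out set."""
    return adversarial_accuracy(w, X, y)

def agsd_round(client_updates, trust_state, X_held, y_held, lr=0.5):
    """One server round: score clients, split them into two groups by trust
    index (crude mean threshold instead of real clustering), reward the
    trusted group, penalize the other, and aggregate only clients whose
    accumulated trust state is positive."""
    scores = np.array([trust_index(w, X_held, y_held) for w in client_updates])
    threshold = scores.mean()
    for cid, s in enumerate(scores):
        trust_state[cid] += lr if s >= threshold else -lr  # stateful update
    trusted = [w for cid, w in enumerate(client_updates) if trust_state[cid] > 0]
    return np.mean(trusted, axis=0) if trusted else np.mean(client_updates, axis=0)

# Demo: 4 clean clients and 1 "backdoored" client on synthetic data.
d = 10
w_true = rng.normal(size=d)
X_held = rng.normal(size=(64, d))
y_held = predict(w_true, X_held)

clean = [w_true + 0.1 * rng.normal(size=d) for _ in range(4)]
poisoned = [-w_true]                         # an update pointing the wrong way
trust_state = {cid: 0.0 for cid in range(5)}

for _ in range(3):                           # a few federated rounds
    w_global = agsd_round(clean + poisoned, trust_state, X_held, y_held)
print("trust state:", trust_state)
print("global model agrees with truth:", (predict(w_global, X_held) == y_held).mean())
```

In this toy run the poisoned client scores near zero adversarial accuracy, so its trust state drifts negative and it is excluded from aggregation, which is the stateful reward/penalty behavior the summary describes; the real AGSD operates on deep networks and a genuine clustering step rather than a mean threshold.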
Keywords
» Artificial intelligence » Federated learning