Summary of Identify Backdoored Model in Federated Learning via Individual Unlearning, by Jiahao Xu et al.
Identify Backdoored Model in Federated Learning via Individual Unlearning
by Jiahao Xu, Zikai Zhang, Rui Hu
First submitted to arXiv on: 1 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In Federated Learning, backdoor attacks pose a significant threat to robustness because of their stealth and effectiveness: malicious models appear statistically similar to benign ones, so existing defense methods struggle to detect them. The authors propose MASA, a method that applies individual unlearning to each local model and identifies malicious models by how they behave during unlearning (see the sketch after this table). To improve performance in challenging non-IID settings, they design pre-unlearning model fusion to mitigate divergence in unlearning behaviors across clients. They also propose an anomaly detection metric with minimal hyperparameters for efficient filtering. Experiments validate the effectiveness of MASA against six different attacks on IID and non-IID datasets. |
Low | GrooveSquid.com (original content) | Backdoor attacks are a sneaky way for hackers to make artificial intelligence models do what they want without anyone realizing it. In Federated Learning this is especially problematic, because many devices train a shared model together. The researchers developed a new method called MASA to identify these malicious models and keep them from causing harm. It makes each local model unlearn what it was trained on, watches how each one behaves, figures out which ones are bad, and filters them out. This helps keep the shared AI model honest and safe. |
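To make the idea more concrete, here is a minimal, illustrative sketch of unlearning-based screening in the spirit of the summaries above. It is not the authors' MASA algorithm: the gradient-ascent "unlearning" proxy, the loss-increase score, and the median/MAD outlier rule are assumptions chosen for clarity, and the function names are hypothetical.

```python
# Minimal sketch, not the paper's MASA algorithm: the gradient-ascent
# "unlearning" proxy, the loss-increase score, and the median/MAD filter
# below are illustrative assumptions.
import copy

import torch
import torch.nn.functional as F


def avg_loss(model, loader):
    """Average cross-entropy loss of a model over a data loader."""
    model.eval()
    with torch.no_grad():
        losses = [F.cross_entropy(model(x), y).item() for x, y in loader]
    return sum(losses) / len(losses)


def unlearning_score(model, loader, steps=3, lr=0.01):
    """Unlearn for a few gradient-ascent steps on a copy of the model and
    report how much its loss rises; benign and backdoored models are
    expected to differ in this behavior."""
    model = copy.deepcopy(model)
    before = avg_loss(model, loader)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            (-loss).backward()  # ascend the loss: a crude "unlearning" step
            opt.step()
    return avg_loss(model, loader) - before


def keep_mask(scores, k=3.0):
    """Keep clients whose unlearning scores are not outliers under a simple
    median / MAD rule (a stand-in for the paper's anomaly-detection metric)."""
    s = torch.tensor(scores)
    med = s.median()
    mad = (s - med).abs().median() + 1e-8
    return ((s - med).abs() / mad <= k).tolist()
```

In this sketch, a server would compute `unlearning_score` for each submitted client model, apply `keep_mask` to the resulting scores, and aggregate only the models that pass the filter; the paper's actual procedure, including pre-unlearning model fusion for non-IID settings, differs in its details.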
Keywords
» Artificial intelligence » Anomaly detection » Federated learning