Summary of Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation, by Jiahao Xu et al.
Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation
by Jiahao Xu, Zikai Zhang, Rui Hu
First submitted to arXiv on: 2 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed Layer-Adaptive Sparsified Model Aggregation (LASA) approach improves the robustness of Federated Learning (FL) against Byzantine attacks by introducing a pre-aggregation sparsification module and a layer-wise adaptive filter. The filter combines magnitude and direction metrics across all clients to selectively aggregate benign layers, reducing the impact of malicious parameters. Theoretical analysis and experiments on various datasets demonstrate that LASA improves robustness, particularly in non-IID settings. (A minimal sketch of this pipeline follows the table.) |
| Low | GrooveSquid.com (original content) | Federated Learning lets many devices learn together without sharing their data, but an attacker can disrupt training by sending fake model updates. Existing defense methods try to address this problem, but they are not very effective. The researchers propose a new approach called Layer-Adaptive Sparsified Model Aggregation (LASA). LASA has two main parts: one that makes updates sparser before they are combined, and another that selects benign layers based on magnitude and direction information from all devices. Tests on different datasets show that LASA improves robustness against attacks. |
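The code below is a minimal sketch of the idea described in the summaries, not the authors' implementation: each client update is sparsified before aggregation, and each layer is then filtered across clients using a magnitude check and a direction check before averaging. The function names (`sparsify_topk`, `lasa_aggregate`), the thresholds, and the use of a coordinate-wise median as the direction reference are illustrative assumptions.

```python
import numpy as np

def sparsify_topk(update, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a flat update; zero the rest."""
    k = max(1, int(keep_ratio * update.size))
    thresh = np.partition(np.abs(update), -k)[-k]
    return np.where(np.abs(update) >= thresh, update, 0.0)

def lasa_aggregate(client_layers, mag_tol=2.0, cos_tol=0.0):
    """Aggregate one layer across clients, filtering by magnitude and direction.

    client_layers: list of 1-D arrays holding the same (already sparsified)
    layer from each client. A client's layer is kept only if its norm is close
    to the median norm and it points roughly in the same direction as the
    coordinate-wise median of all clients' layers.
    """
    norms = np.array([np.linalg.norm(u) for u in client_layers])
    median_norm = np.median(norms)
    reference = np.median(np.stack(client_layers), axis=0)  # robust direction reference

    benign = []
    for u, n in zip(client_layers, norms):
        mag_ok = n <= mag_tol * median_norm  # magnitude metric
        cos = float(u @ reference) / (n * np.linalg.norm(reference) + 1e-12)
        dir_ok = cos > cos_tol               # direction metric
        if mag_ok and dir_ok:
            benign.append(u)

    # Fall back to the robust reference if every client was filtered out.
    return np.mean(benign, axis=0) if benign else reference
```

In a full federated round, the server would apply `sparsify_topk` to each client's update, call `lasa_aggregate` once per layer, and stitch the aggregated layers back into the global model; the actual LASA filtering rules may differ from these simplified checks.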
Keywords
» Artificial intelligence » Federated learning