Summary of “Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization,” by Ziqing Fan et al.
Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
by Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang
First submitted to arXiv on: 29 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The proposed algorithm, FedLESAM, addresses challenges in federated learning (FL) by locally estimating the direction of the global perturbation on the client side. It builds on sharpness-aware minimization (SAM), a prevalent method for mitigating the performance degradation caused by data heterogeneity and multi-step updates. Whereas prior federated SAM methods compute perturbations from each client’s own data and thus minimize only local sharpness, FedLESAM estimates the global perturbation locally, improving both quality and efficiency over the original SAM-based methods. A theoretical analysis yields a slightly tighter bound than the original FedSAM, and experiments on four benchmark datasets demonstrate FedLESAM’s superior performance and speed. |
| Low | GrooveSquid.com (original content) | FedLESAM is a new approach to federated learning that helps overcome problems caused by data heterogeneity and multi-step updates. In earlier federated SAM methods, each client computes its own local perturbation, which may not reduce the sharpness of the global model and can hurt performance. FedLESAM instead estimates the direction of the global perturbation on each client’s side, improving quality and speed and making training more effective than previous methods. |
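To make the contrast in the summaries above concrete, here is a minimal sketch of the two perturbation strategies. This is not the paper’s exact algorithm: the function names, the `rho` hyperparameter, and the sign convention for the estimated global direction are illustrative assumptions.

```python
import numpy as np

def sam_local_perturbation(grad, rho=0.05):
    # Vanilla (local) SAM: perturb along the client's own loss gradient,
    # scaled to radius rho. This targets the client's *local* sharpness.
    return rho * grad / (np.linalg.norm(grad) + 1e-12)

def lesam_global_perturbation(w_global_curr, w_global_prev, rho=0.05):
    # FedLESAM-style idea (sketch): estimate the *global* perturbation
    # direction from the change between the last two global models the
    # client received, so no extra forward/backward pass is needed.
    # Sign convention (previous minus current) is an assumption here.
    d = w_global_prev - w_global_curr
    return rho * d / (np.linalg.norm(d) + 1e-12)
```

Both functions return a vector of norm `rho`; the difference is only in the direction used, which is the core efficiency argument: the global-model difference is already available on each client, while a local ascent step requires an additional gradient computation.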
Keywords
» Artificial intelligence » Federated learning » SAM