
Summary of Towards Communication-efficient Federated Learning via Sparse and Aligned Adaptive Optimization, by Xiumei Deng et al.


Towards Communication-efficient Federated Learning via Sparse and Aligned Adaptive Optimization

by Xiumei Deng, Jun Li, Kang Wei, Long Shi, Zehui Xiong, Ming Ding, Wen Chen, Shi Jin, H. Vincent Poor

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content written by GrooveSquid.com)
The paper proposes FedAdam-SSM, a novel adaptive moment estimation algorithm designed to reduce communication overhead in federated learning by sparsifying both model updates and moment estimates. The algorithm uses a shared sparse mask (SSM), which removes the need to transmit separate masks for the local model updates and the moment estimates. The authors derive theoretical bounds on the divergence between the locally trained model and the desired centralized model, relating it to the sparsification error and the imbalanced data distribution, and they provide convergence bounds for both convex and non-convex objective functions. Experimental results show that FedAdam-SSM outperforms the baselines in convergence rate and test accuracy.
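As a rough illustration of the shared-mask idea described above, here is a minimal Python sketch, not the paper's actual FedAdam-SSM procedure: a single mask is applied to a client's local model update and to both Adam moment estimates before upload, so only one sparsity pattern needs to be communicated. The top-k selection rule, the 10% keep ratio, and all function names are illustrative assumptions.

```python
import numpy as np

def top_k_mask(x, k):
    """Boolean mask keeping the k largest-magnitude entries of x (illustrative rule)."""
    mask = np.zeros_like(x, dtype=bool)
    mask[np.argsort(np.abs(x))[-k:]] = True
    return mask

def sparsify_upload(model_update, m, v, keep_ratio=0.1):
    """Apply one shared sparse mask to the model update and both moment estimates."""
    k = max(1, int(keep_ratio * model_update.size))
    mask = top_k_mask(model_update, k)  # shared sparse mask (SSM), reused for all three tensors
    return model_update * mask, m * mask, v * mask, mask

# Example: a client compresses its local state before sending it to the server.
rng = np.random.default_rng(0)
update, m, v = rng.normal(size=1000), rng.normal(size=1000), rng.random(1000)
sparse_update, sparse_m, sparse_v, mask = sparsify_upload(update, m, v)
print(mask.sum(), "of", update.size, "entries transmitted")
```

Because the same mask is reused for the update and both moments, the client uploads one sparsity pattern instead of three, which is the source of the communication savings the paper targets.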
Low Difficulty Summary (original content written by GrooveSquid.com)
Federated learning lets devices collaboratively train a shared model without handing over their own data. But it can be slow, because each device has to send its updates to a central server, which takes time. The authors of this paper created a new algorithm called FedAdam-SSM that makes federated learning faster by reducing what needs to be sent to the server: devices compress their model updates and moment estimates before transmitting them. Because less data has to be sent, training finishes sooner and uses less communication energy.

Keywords

  • Artificial intelligence
  • Federated learning
  • Mask
  • Objective function