Summary of Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees, by Shahryar Zehtabi et al.
Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees
by Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton
First submitted to arXiv on: 5 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This research proposes a new methodology for Decentralized Federated Learning (DFL) called Decentralized Sporadic Federated Learning (DSpodFL). Existing DFL works have mainly focused on a fixed number of local updates between model exchanges, overlooking heterogeneity and dynamics in communication and computation capabilities. DSpodFL builds on the concept of sporadicity in both the local gradient and aggregation processes, capturing heterogeneous and time-varying computation/communication scenarios. The algorithmic framework models the per-iteration occurrence of gradient descent at each client and of model exchange between client pairs as arbitrary indicator random variables (see the sketch after this table). The paper analytically characterizes the convergence behavior for convex and non-convex models under mild assumptions on communication graph connectivity, data heterogeneity, and gradient noise. Experimental results demonstrate improved training speeds compared to baselines under various system settings. |
Low | GrooveSquid.com (original content) | Decentralized federated learning is a way for machines to learn together without a central server. Most existing methods assume that each machine does the same number of calculations between sharing its work with others. But what if machines have different abilities or communication patterns? This research proposes a new method called DSpodFL, which takes these differences into account. It models when each machine calculates and shares its work as random events, allowing for more flexible and realistic scenarios. The study shows that this approach can lead to faster training times and better results. |
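To make the indicator-variable idea from the medium summary concrete, here is a minimal illustrative sketch, not the authors' implementation: per-client gradient steps and per-edge model exchanges each fire according to Bernoulli indicators in every iteration. The names `dspodfl_step`, `grad_prob`, `link_prob`, and `mix_weight` are assumptions made for illustration, and the simple gossip-style mixing stands in for the paper's actual aggregation rule.

```python
# Hypothetical sketch of a sporadic decentralized FL iteration (not the authors' code).
import numpy as np

def dspodfl_step(models, grad_fn, adjacency, grad_prob, link_prob, lr, mix_weight):
    """One iteration with sporadic local SGD and sporadic model exchange.

    models:    (n_clients, dim) array of local model parameters
    grad_fn:   callable(i, model) -> stochastic gradient for client i
    adjacency: (n_clients, n_clients) 0/1 communication graph
    grad_prob: probability a client computes a local gradient this iteration
    link_prob: probability a connected client pair exchanges models this iteration
    """
    n = models.shape[0]
    new_models = models.copy()

    # Sporadic aggregation: each edge (i, j) is active with probability link_prob.
    for i in range(n):
        for j in range(i + 1, n):
            if adjacency[i, j] and np.random.rand() < link_prob:
                diff = models[j] - models[i]
                new_models[i] += mix_weight * diff
                new_models[j] -= mix_weight * diff

    # Sporadic local SGD: client i takes a gradient step only when its indicator fires.
    for i in range(n):
        if np.random.rand() < grad_prob:
            new_models[i] -= lr * grad_fn(i, models[i])

    return new_models
```

Setting `grad_prob = link_prob = 1` recovers a standard decentralized SGD step, while lower probabilities mimic clients with limited compute or intermittent links.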
Keywords
* Artificial intelligence
* Federated learning
* Gradient descent