Summary of Optimal Batch Allocation for Wireless Federated Learning, by Jaeyoung Song et al.
Optimal Batch Allocation for Wireless Federated Learning
by Jaeyoung Song, Sang-Woon Jeon
First submitted to arXiv on: 3 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper investigates federated learning, a technique that enables a global model to be trained without direct access to private data by leveraging communication between a server and local devices. It focuses on the completion time required to achieve a target performance, analyzing the number of iterations federated learning needs to reach a specified optimality gap from the minimum global loss. The study also characterizes the time required for each iteration under two multiple-access schemes: time-division multiple access (TDMA) and random access (RA). The authors propose a step-wise batch allocation that is shown to be optimal for TDMA-based systems and that also significantly reduces completion time for RA-based learning systems. Numerical experiments on real data validate these results. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Federated learning helps devices train a shared model without sharing their data. This paper looks at how long it takes to finish training when you want the model to be good enough. It finds that if devices take turns sending updates, it takes fewer steps than if they all send updates at once. The researchers also developed an efficient way for devices to share information, making the whole process faster. |
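The TDMA intuition above can be sketched as a toy simulation. Everything in this sketch is a hypothetical illustration, not the paper's actual model: the function `tdma_iteration_time`, the constants, and the assumption that compute time grows linearly with batch size are all made up for exposition. The point it demonstrates is why a step-wise (uneven) batch allocation can help when devices transmit one at a time: later-transmitting devices can keep computing on larger batches while earlier devices occupy the channel.

```python
# Toy model of one federated-learning iteration under TDMA.
# Assumptions (illustrative only): compute time is linear in batch size,
# every device's transmission takes one fixed slot, and devices transmit
# in a fixed order on a single shared channel.

def tdma_iteration_time(batches, compute_per_sample=0.01, tx_time=1.0):
    """Return the time until the last device finishes transmitting.

    Device i may start transmitting only after it finishes computing on
    its local batch AND the channel is free (device i-1 has finished).
    """
    channel_free = 0.0
    for batch_size in batches:
        compute_done = batch_size * compute_per_sample
        start_tx = max(compute_done, channel_free)
        channel_free = start_tx + tx_time
    return channel_free

# 400 samples per iteration split across 4 devices, two allocations:
equal = [100, 100, 100, 100]       # all devices finish computing together, then queue
stepwise = [25, 75, 125, 175]      # later transmitters get larger batches

print("equal batches:   ", tdma_iteration_time(equal))
print("step-wise batches:", tdma_iteration_time(stepwise))
```

With these made-up numbers, the step-wise allocation finishes the iteration sooner because computation overlaps with earlier devices' transmissions instead of every device idling in the transmit queue.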
Keywords
- Artificial intelligence
- Federated learning