Summary of FLASH: Federated Learning Across Simultaneous Heterogeneities, by Xiangyu Chang et al.
FLASH: Federated Learning Across Simultaneous Heterogeneities
by Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury
First submitted to arXiv on: 13 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here. |
Medium | GrooveSquid.com (original content) | This research proposes a novel approach to federated learning (FL), which trains machine learning models across diverse data sources without sharing local data. The key challenge in FL is client heterogeneity, arising from variations in data distribution, data quality, and compute/communication latency. To address this, the authors introduce FLASH (Federated Learning Across Simultaneous Heterogeneities), a lightweight client selection algorithm that balances statistical information about client data quality, data distribution, and latency. FLASH models the learning dynamics through contextual multi-armed bandits (CMAB) and dynamically selects the most promising clients each round. Experimental results demonstrate substantial improvements over state-of-the-art baselines, with up to 10% absolute accuracy gains. (A toy sketch of bandit-based client selection follows this table.) |
Low | GrooveSquid.com (original content) | Federated learning is a way for many devices to work together on machine learning tasks without sharing their own data. Right now, it can be tricky because each device has different kinds of data and different ways of processing information. The researchers developed a new method called FLASH that helps with this challenge. It picks the best devices to use for each round of training, based on how good their data is and how quickly they can process it. By doing things this way, FLASH can get better results than other methods, even when there are lots of differences between devices. |
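To make the client-selection idea more concrete, here is a minimal, self-contained sketch of contextual-bandit client selection in the spirit the medium summary describes. It uses a generic LinUCB-style rule; the class name, the three context features (data quality, distribution skew, latency), and the synthetic reward are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

class LinUCBClientSelector:
    """Contextual-bandit client selection (LinUCB-style sketch).

    Each client is an "arm"; its context vector encodes heterogeneity
    signals such as estimated data quality, distribution skew, and latency.
    """

    def __init__(self, context_dim, alpha=1.0):
        self.alpha = alpha               # exploration strength
        self.A = np.eye(context_dim)     # regularized design matrix (d x d)
        self.b = np.zeros(context_dim)   # reward-weighted sum of contexts

    def scores(self, contexts):
        """UCB score for every client given its context vector (n x d)."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b           # current linear reward model
        mean = contexts @ theta
        bonus = self.alpha * np.sqrt(
            np.einsum("ij,jk,ik->i", contexts, A_inv, contexts)
        )
        return mean + bonus

    def select(self, contexts, k):
        """Pick the k most promising clients for this round."""
        return np.argsort(-self.scores(contexts))[:k]

    def update(self, context, reward):
        """Update the bandit with the observed per-client reward
        (e.g., the drop in global validation loss attributed to the client)."""
        self.A += np.outer(context, context)
        self.b += reward * context


# Hypothetical usage: 100 clients, 3 heterogeneity features
# [data quality, distribution skew, latency], 10 clients selected per round.
rng = np.random.default_rng(0)
selector = LinUCBClientSelector(context_dim=3, alpha=0.5)
for round_idx in range(5):
    client_contexts = rng.random((100, 3))
    chosen = selector.select(client_contexts, k=10)
    for c in chosen:
        reward = rng.random()            # stand-in for observed client utility
        selector.update(client_contexts[c], reward)
```

In a real federated setup, the reward would come from each selected client's measured contribution (for example, the change in global validation loss weighed against its round time), so the bandit gradually favors clients whose data and latency profiles help the global model most.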
Keywords
- Artificial intelligence
- Federated learning
- Machine learning