Summary of Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks, by Shichen Zhan et al.
Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks
by Shichen Zhan, Yebo Wu, Chunlin Tian, Yan Zhao, Li Li
First submitted to arXiv on: 11 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper proposes FedStitch, a hierarchical coordination framework for heterogeneous federated learning that addresses the limitations of traditional approaches. Federated learning (FL) coordinates multiple devices to collaboratively train a shared model while preserving data privacy, but existing methods incur a large memory footprint and high energy consumption during training, excluding low-end devices from contributing their own data. FedStitch composes the global model by stitching together pre-trained blocks: clients select suitable blocks based on their local data, and the server aggregates these selections. The framework consists of three core components: an RL-weighted aggregator, a search space optimizer, and a local energy optimizer. The results show that FedStitch improves model accuracy by up to 20.93%, achieves a speedup of up to 8.12%, reduces memory footprint by up to 79.5%, and saves 89.41% of energy during the learning procedure.
Low | GrooveSquid.com (original content) | Federated learning helps devices share knowledge while keeping their data private. But existing methods are slow, use too much memory, and waste energy. The new approach, FedStitch, makes training faster and more efficient, and lets low-end devices join in. It works by stitching together pre-trained blocks of knowledge from different sources. The framework has three main parts: one picks the right blocks, another narrows down the options, and the last saves energy during training.
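To make the stitching idea concrete, here is a minimal toy sketch of the coordination loop described above: clients pick the pre-trained block that works best on their local data, and the server combines their votes with learned per-client weights. All names, the multiplicative-weights update, and the voting scheme are illustrative assumptions; the paper's actual RL-weighted aggregator is not specified here.

```python
# Hypothetical sketch of FedStitch-style coordination. The candidate blocks,
# the client scoring, and the weight update rule are all illustrative
# assumptions, not the paper's actual algorithm.

CANDIDATE_BLOCKS = ["block_A", "block_B", "block_C"]  # pre-trained blocks for one layer

def client_select_block(validation_scores):
    """Each client picks the block scoring best on its own local data."""
    return max(validation_scores, key=validation_scores.get)

def rl_weighted_aggregate(votes, weights, lr=0.1):
    """Server side: tally client votes, weighting each client by a credit
    score (a simple multiplicative-weights stand-in for the paper's
    RL-weighted aggregator), then reinforce clients that voted with the
    majority choice."""
    tally = {b: 0.0 for b in CANDIDATE_BLOCKS}
    for client, block in votes.items():
        tally[block] += weights[client]
    chosen = max(tally, key=tally.get)
    for client, block in votes.items():
        weights[client] *= (1 + lr) if block == chosen else (1 - lr)
    return chosen, weights

# One toy round with three clients and equal initial weights.
weights = {"c1": 1.0, "c2": 1.0, "c3": 1.0}
votes = {
    "c1": client_select_block({"block_A": 0.8, "block_B": 0.6, "block_C": 0.5}),
    "c2": client_select_block({"block_A": 0.7, "block_B": 0.9, "block_C": 0.4}),
    "c3": client_select_block({"block_A": 0.9, "block_B": 0.3, "block_C": 0.2}),
}
chosen, weights = rl_weighted_aggregate(votes, weights)
print(chosen)  # block_A wins 2-of-3 with equal initial weights
```

Over repeated rounds, clients whose selections consistently match the aggregated choice gain influence, which is one plausible reading of how a reinforcement-style weighting could steer the block search.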
Keywords
» Artificial intelligence » Federated learning