Summary of FedMef: Towards Memory-efficient Federated Dynamic Pruning, by Hong Huang et al.
FedMef: Towards Memory-efficient Federated Dynamic Pruning
by Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu
First submitted to arXiv on: 21 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | The paper proposes FedMef, a novel federated learning (FL) framework that tackles the challenge of training deep neural networks on resource-constrained devices. FL preserves data confidentiality by keeping training decentralized, but this comes at the cost of high computation and memory demands on each device. The authors introduce budget-aware extrusion, which maintains pruning efficiency while preserving post-pruning accuracy, and scaled activation pruning, which shrinks the memory footprint of cached activations (a hedged code sketch of these two ideas follows this table). Together, these techniques let FedMef reduce memory footprint by 28.5% compared to state-of-the-art methods while achieving superior accuracy. The work advances FL on resource-constrained devices, with applications in areas such as IoT, autonomous vehicles, and smart homes. |
Low | GrooveSquid.com (original content) | The researchers have developed a new way to train artificial intelligence models on devices that don’t have much power or memory. This is important because many devices, like smartphones or smart home appliances, can’t handle the complex computations these models require. The new approach, called FedMef, uses special techniques to make the models more efficient and reduce the need for powerful computers. This means AI models can be trained on a wider range of devices, which has the potential to improve many areas of our lives, from healthcare to transportation. |
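To make the two mechanisms in the medium summary more concrete, here is a minimal PyTorch sketch of the general ideas: pruning weights under a fixed parameter budget, and sparsifying-then-rescaling activations to cut training-time memory. It is an illustration under our own assumptions, not the authors' implementation; `prune_to_budget`, `ScaledActivationPruning`, and all constants are hypothetical stand-ins for the paper's budget-aware extrusion and scaled activation pruning.

```python
# Hypothetical sketch of the two ideas summarized above; NOT the FedMef
# authors' algorithm. (1) magnitude pruning under a global parameter budget,
# (2) zeroing small activations and rescaling survivors.
import torch
import torch.nn as nn

def prune_to_budget(model: nn.Module, budget: int) -> None:
    """Keep only the `budget` largest-magnitude weights across the model,
    zeroing the rest (a common magnitude-pruning heuristic)."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters()])
    if budget >= all_weights.numel():
        return  # budget already covers every parameter
    # Threshold chosen so that exactly `budget` weights survive.
    threshold = torch.topk(all_weights, budget, largest=True).values.min()
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() >= threshold).float())

class ScaledActivationPruning(nn.Module):
    """Illustrative stand-in for activation pruning: drop the smallest
    activations and rescale the rest so expected magnitude is preserved
    (the same inverted scaling used by dropout)."""
    def __init__(self, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.keep_ratio * x.numel()))
        threshold = torch.topk(x.abs().flatten(), k).values.min()
        mask = (x.abs() >= threshold).float()
        return x * mask / self.keep_ratio

# Toy usage: prune a small model to a 50% parameter budget.
model = nn.Sequential(nn.Linear(8, 16), ScaledActivationPruning(0.5),
                      nn.Linear(16, 2))
total = sum(p.numel() for p in model.parameters())
prune_to_budget(model, budget=total // 2)
```

In a federated setting, a step like `prune_to_budget` would run on each client between local training rounds so the model never exceeds the device's memory budget; the activation mask reduces what must be cached for backpropagation.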
Keywords
* Artificial intelligence
* Federated learning
* Pruning