Summary of DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices, by Yongzhe Jia et al.
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices
by Yongzhe Jia, Xuyun Zhang, Hongsheng Hu, Kim-Kwang Raymond Choo, Lianyong Qi, Xiaolong Xu, Amin Beheshti, Wanchun Dou
First submitted to arXiv on: 8 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | Federated learning (FL) is a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaborate on optimizing a global model without sharing private data. However, existing FL frameworks suffer from efficacy deterioration due to system heterogeneity, particularly when domain shifts occur across local data. To address this challenge, we propose DapperFL, a heterogeneous FL framework that produces personalized compact local models using Model Fusion Pruning (MFP) and Domain Adaptive Regularization (DAR). The MFP module prunes local models with fused knowledge from both local and remaining domains to ensure robustness to domain shifts. The DAR module employs regularization generated by the pruned model to learn robust representations across domains. We also introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. Experimental results on benchmark datasets demonstrate that DapperFL outperforms state-of-the-art FL frameworks by up to 2.28% while achieving significant model volume reductions (20-80%). Our code is available at: https://github.com/jyzgh/DapperFL. A rough code sketch of these components follows the table. |
| Low | GrooveSquid.com (original content) | Imagine a world where devices can work together without sharing their private data. This is called federated learning, and it’s very useful for edge computing. However, when different devices have different types of data, it gets harder to make this work well. To fix this problem, we created DapperFL, a new way to do federated learning that can handle different types of data. We used two special techniques: Model Fusion Pruning and Domain Adaptive Regularization. These help devices learn from each other’s data without sharing their own. We tested our method on some real-world datasets and it worked much better than previous methods. Our code is available online if you want to try it out. |
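To make the three components more concrete, here is a minimal, hedged sketch of a DapperFL-style round. It is not the paper's implementation (see the GitHub repository for that): magnitude-based masking stands in for Model Fusion Pruning, a weight-space penalty on pruned-away parameters stands in for Domain Adaptive Regularization, and overlap-aware weighted averaging stands in for the paper's tailored aggregation. All function names and the NumPy weight representation are illustrative assumptions.

```python
# Illustrative sketch only; the actual MFP, DAR, and aggregation modules
# in DapperFL differ in detail (see https://github.com/jyzgh/DapperFL).
import numpy as np


def prune_by_magnitude(weights, keep_ratio):
    """Stand-in for MFP: mask out the smallest-magnitude entries per layer."""
    masks = {}
    for name, w in weights.items():
        flat = np.abs(w).ravel()
        k = max(1, int(keep_ratio * flat.size))
        threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
        masks[name] = (np.abs(w) >= threshold).astype(w.dtype)
    return masks


def dar_penalty(weights, masks, coeff=0.1):
    """Stand-in for DAR: penalize distance between full and pruned weights."""
    return coeff * sum(
        float(np.sum((w - w * masks[name]) ** 2)) for name, w in weights.items()
    )


def aggregate_heterogeneous(client_weights, client_masks, client_sizes):
    """Average each parameter only over the clients that kept it after pruning,
    weighted by local dataset size (a simple take on heterogeneous aggregation)."""
    agg = {}
    for name in client_weights[0]:
        num = np.zeros_like(client_weights[0][name])
        den = np.zeros_like(client_weights[0][name])
        for w, m, n in zip(client_weights, client_masks, client_sizes):
            num += n * m[name] * w[name]
            den += n * m[name]
        agg[name] = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
    return agg


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy clients with identically shaped one-layer "models".
    clients = [{"fc": rng.normal(size=(4, 4))} for _ in range(2)]
    masks = [prune_by_magnitude(w, keep_ratio=0.5) for w in clients]
    print("DAR-style penalty, client 0:", dar_penalty(clients[0], masks[0]))
    global_model = aggregate_heterogeneous(clients, masks, client_sizes=[100, 50])
    print("Aggregated layer shape:", global_model["fc"].shape)
```

In this sketch, clients end up with different effective architectures because their masks differ, which is why the aggregation step averages each weight only over the clients that retained it; the paper's actual algorithm handles tailored architectures and weights more carefully.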
Keywords
» Artificial intelligence » Federated learning » Machine learning » Pruning » Regularization