Summary of pFedMoE: Data-Level Personalization with Mixture of Experts for Model-Heterogeneous Personalized Federated Learning, by Liping Yi et al.
pFedMoE: Data-Level Personalization with Mixture of Experts for Model-Heterogeneous Personalized Federated Learning
by Liping Yi, Han Yu, Chao Ren, Heng Zhang, Gang Wang, Xiaoguang Liu, Xiaoxiao Li
First submitted to arXiv on: 2 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract |
| Medium | GrooveSquid.com (original content) | A novel federated learning approach, pFedMoE, is proposed to address the challenges of data and model heterogeneity, performance, and communication costs in decentralized training. The method pairs a shared homogeneous small feature extractor with each client's heterogeneous large model, treating the two extractors as experts in a mixture-of-experts design. A local gating network on each client produces per-sample weights for fusing the two experts' representations, and the fused features are then processed by the local header. This enhances local model personalization at a fine-grained, data-level granularity while supporting model heterogeneity. |
| Low | GrooveSquid.com (original content) | Federated learning helps devices learn together without sharing their raw data. But this approach has some problems: different devices may need different models or features that work better for them. To solve these issues, scientists created a new method called pFedMoE. It lets each device use its own unique model and feature extractor while still learning from others. The result is more personalized learning with improved performance. |
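The mixture-of-experts fusion described in the medium summary can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): a shared small extractor and a client-local extractor each produce a feature vector, a local gating network outputs two per-sample weights, and the weighted fusion is fed to the local header. All layer sizes and names here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions; both experts map inputs to the same feature size.
D_IN, D_FEAT, N_CLASSES = 8, 4, 3

# Expert 1: shared homogeneous small feature extractor (same on every client).
W_global = rng.normal(size=(D_IN, D_FEAT)) * 0.1
# Expert 2: the client's heterogeneous local extractor (can differ per client).
W_local = rng.normal(size=(D_IN, D_FEAT)) * 0.1
# Local gating network: maps each input to two mixture weights
# (this per-sample weighting is the "data-level" personalization).
W_gate = rng.normal(size=(D_IN, 2)) * 0.1
# Local header: maps the fused features to class logits.
W_head = rng.normal(size=(D_FEAT, N_CLASSES)) * 0.1

def forward(x):
    f_g = np.tanh(x @ W_global)       # global expert's representation
    f_l = np.tanh(x @ W_local)        # local expert's representation
    a = softmax(x @ W_gate)           # gating weights, sum to 1
    fused = a[0] * f_g + a[1] * f_l   # weighted fusion of the two experts
    return fused @ W_head             # local header produces logits

x = rng.normal(size=D_IN)
logits = forward(x)
```

During federated training, only the small shared extractor would be aggregated across clients, while the local extractor, gating network, and header stay private, which keeps the communication cost low.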
Keywords
- Artificial intelligence
- Feature extraction
- Federated learning