
Summary of FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts, by Hanzi Mei et al.


FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts

by Hanzi Mei, Dongqi Cai, Ao Zhou, Shangguang Wang, Mengwei Xu

First submitted to arXiv on: 21 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes FedMoE, an efficient personalized Federated Learning (FL) framework that addresses data heterogeneity across edge devices with a sparsely-activated Mixture-of-Experts (MoE) architecture. FedMoE replaces the traditional dense feed-forward block with parallel feed-forward networks (experts), giving clients greater flexibility to adapt to diverse data and task types. The method runs in two fine-tuning stages: first, a heuristic search based on observed activation patterns identifies a suboptimal submodel for each client; second, those submodels are trained further on the clients and aggregated at the server through a novel modular aggregation strategy, while global expert recommendation progressively adjusts each submodel toward the optimal one. Experimental results show that FedMoE outperforms previous personalized FL methods. (A minimal code sketch of the sparse MoE idea follows these summaries.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper makes AI models better for use on lots of devices with different kinds of data. This is important because these devices often don’t have a lot of power or memory, so they need special help to learn from the data they collect. The authors create a new way called FedMoE that uses a type of model called Mixture-of-Experts (MoE) to make it work better. MoE is like a team of smaller models that can work together on different tasks. The authors test their method and show that it’s better than previous ways of doing this.
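To make the sparsely-activated MoE idea from the medium summary concrete, here is a minimal PyTorch sketch of a top-k-gated MoE feed-forward layer. It is not the authors' implementation: the class name SparseMoELayer, the hidden sizes, the expert count, and the top-2 routing rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a sparsely-activated
# Mixture-of-Experts feed-forward layer with top-k gating.
# All sizes and the routing rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Parallel feed-forward "experts"; a client submodel could keep
        # only the experts its data actually activates.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(d_model, num_experts)  # router over experts

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.gate(x)                   # (tokens, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Usage example: route a batch of token embeddings through the sparse layer.
layer = SparseMoELayer()
tokens = torch.randn(16, 256)
print(layer(tokens).shape)  # torch.Size([16, 256])
```

In a FedMoE-style setup, a client submodel would keep only the experts its observed activation patterns favor, and the server would aggregate matching experts returned by different clients; the exact heuristic search and modular aggregation rules are described in the paper.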

Keywords

» Artificial intelligence  » Federated learning  » Fine tuning  » Mixture of experts