
Summary of FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation, by Ziwei Zhan et al.


FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation

by Ziwei Zhan, Wenkuan Zhao, Yuanqing Li, Weijie Liu, Xiaoxi Zhang, Chee Wei Tan, Chuan Wu, Deke Guo, Xu Chen

First submitted to arXiv on: 4 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) enables multiple clients to collaboratively train models without sharing their private data. However, the exceptional performance of large-scale models is hard to attain in FL because clients have constrained computational and communication resources. The Mixture of Experts (MoE) architecture addresses this challenge through its sparse activation property, which reduces the computational workload and communication demands during both inference and model updates. MoE also facilitates better personalization by allowing each expert to specialize in a different subset of the data distribution. To reduce server-client transmission overhead, we propose FedMoE-DA, a new FL model training framework that leverages MoE and incorporates a novel domain-aware, fine-grained aggregation strategy. The framework exploits the correlation among each client's expert models together with the data heterogeneity across clients. We also utilize peer-to-peer (P2P) communication for selective expert model synchronization, which significantly reduces the volume of server-client transmissions. FedMoE-DA achieves excellent performance while reducing the communication pressure on the server.
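
The sparse activation property mentioned above means a small router picks only a few experts for each input, so most expert parameters stay idle in any single forward pass. The snippet below is a minimal illustrative sketch of such a top-k gated MoE layer in PyTorch; the layer sizes, expert count, and value of k are assumptions for the example, not values from the paper.

```python
# A minimal sketch of a sparsely activated Mixture-of-Experts layer with
# top-k gating. Sizes, expert count, and k are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_in=32, d_hidden=64, n_experts=4, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_in, n_experts)        # router over experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_in))
            for _ in range(n_experts)
        )

    def forward(self, x):                             # x: (batch, d_in)
        scores = self.gate(x)                         # (batch, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)      # keep only k experts per sample
        weights = F.softmax(topv, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                    # evaluate only selected experts
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

# Usage: only k of the n_experts networks run for each sample.
layer = SparseMoE()
y = layer(torch.randn(8, 32))
print(y.shape)  # torch.Size([8, 32])
```

Because only k experts run per sample, the per-sample compute does not grow with the total number of experts, which is the workload reduction the summary refers to.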

Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way to train models without sharing private data. Big models are powerful but need lots of computing power and communication. An architecture called Mixture of Experts (MoE) helps because only a few experts are active at a time, so training and testing take less work, and each expert can specialize, which makes the model easier to personalize. To make communication between the server and clients more efficient, we created a framework called FedMoE-DA that combines MoE with a special way of merging expert models from different clients. This makes the model work better while reducing the amount of data sent back and forth. Our new approach strikes a good balance between performance and communication efficiency.
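
The "special way of merging expert models" is the paper's domain-aware, fine-grained aggregation. The sketch below only illustrates the fine-grained part under assumed details: each expert is aggregated separately across clients, and clients are weighted by the cosine similarity of their expert updates as a rough stand-in for domain affinity. The actual similarity signal and weighting rule used in FedMoE-DA may differ.

```python
# A hedged sketch of fine-grained, per-expert aggregation on the server.
# Each expert is aggregated separately (rather than averaging whole models),
# and client pairs are weighted by cosine similarity of their expert updates
# as an assumed proxy for domain affinity; FedMoE-DA's actual rule may differ.
import torch
import torch.nn.functional as F

def aggregate_experts(expert_updates):
    """expert_updates[c][e]: flattened update of expert e from client c."""
    n_clients = len(expert_updates)
    n_experts = len(expert_updates[0])
    personalized = [[None] * n_experts for _ in range(n_clients)]
    for e in range(n_experts):
        ups = torch.stack([expert_updates[c][e] for c in range(n_clients)])
        # pairwise client affinity for this expert, (n_clients, n_clients)
        sim = F.cosine_similarity(ups.unsqueeze(1), ups.unsqueeze(0), dim=-1)
        w = torch.softmax(sim, dim=-1)                # row-normalized affinities
        for c in range(n_clients):
            # each client gets a similarity-weighted mixture of this expert
            personalized[c][e] = (w[c].unsqueeze(1) * ups).sum(dim=0)
    return personalized

# Toy usage: 3 clients, 2 experts, 10-dim flattened parameters each.
updates = [[torch.randn(10) for _ in range(2)] for _ in range(3)]
agg = aggregate_experts(updates)
print(len(agg), len(agg[0]), agg[0][0].shape)  # 3 2 torch.Size([10])
```

In this sketch each client ends up with a personalized copy of every expert, built mostly from clients whose updates point in a similar direction.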

Keywords

» Artificial intelligence  » Federated learning  » Inference  » Mixture of experts