
Communication-Efficient Multimodal Federated Learning: Joint Modality and Client Selection

by Liangqi Yuan, Dong-Jun Han, Su Wang, Devesh Upadhyay, Christopher G. Brinton

First submitted to arXiv on: 30 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes multimodal federated learning with joint Modality and Client selection (mmFedMC), a new methodology that tackles the challenges of multimodal settings. In heterogeneous networks, clients collect measurements across multiple modalities, which complicates model training. The proposed method incorporates modality selection for each client, ranking modality models by their impact (estimated via Shapley value analysis), their model size, and their recency, to enhance generalizability while limiting upload cost. In addition, the server selects clients based on the local loss of the modality models at each client. The paper demonstrates the effectiveness of mmFedMC on five real-world datasets, achieving accuracy comparable to the baselines while reducing communication overhead by over 20x. An illustrative sketch of this two-step selection logic follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
Multimodal federated learning aims to improve model training in settings where clients collect data of multiple types (such as images and audio). This is challenging when different clients hold different types of data and cannot upload all of their models to the server. A new method called mmFedMC helps solve these problems by selecting which data types each client should focus on and which clients are most important for the server to learn from.

Keywords

* Artificial intelligence
* Federated learning