Summary of Overcome Modal Bias in Multi-modal Federated Learning via Balanced Modality Selection, by Yunfeng Fan et al.
Overcome Modal Bias in Multi-modal Federated Learning via Balanced Modality Selection
by Yunfeng Fan, Wenchao Xu, Haozhao Wang, Fushuo Huo, Jinyu Chen, Song Guo
First submitted to arXiv on: 31 Dec 2023
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper addresses the challenge of selecting suitable clients in federated learning (FL) with multi-modal data. Existing selection methods target uni-modal data and can be ineffective in multi-modal FL because of modality imbalance, which induces a global modality-level bias. The authors empirically show that local training on a single modality can contribute more to the global model than training on all modalities. To overcome this bias, they propose the Balanced Modality Selection framework for MFL (BMSFed), which combines a modal enhancement loss with a modality selection strategy aimed at diversity and global modal balance. Extensive experiments on audio-visual, colored-gray, and front-back datasets showcase the superiority of BMSFed over baselines in exploiting multi-modal data.
Low | GrooveSquid.com (original content) | This paper tackles a problem that arises when we combine different types of data in machine learning. When we have many kinds of data, it is hard to decide which parts matter most and should be used in our model. The authors found that even though using all the data seems best, sometimes training on just one type can actually make a better model. They created a new way to choose the right parts of the data, called Balanced Modality Selection (BMSFed). This helps get rid of the bias and makes models work better with different types of data.
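The medium summary describes selecting clients so that the modalities covered in each round stay diverse and globally balanced. A rough, illustrative sketch of that selection idea (a simple greedy heuristic, not the paper's actual BMSFed algorithm; client ids and modality names here are hypothetical):

```python
from collections import Counter

def balanced_modality_selection(clients, num_select):
    """Greedily pick clients so the selected set covers modalities evenly.

    `clients` maps a client id to the set of modalities it holds,
    e.g. {"c1": {"audio"}, "c2": {"audio", "visual"}}.
    Illustrative sketch only -- not the BMSFed selection rule itself.
    """
    counts = Counter()      # how often each modality is already covered
    selected = []
    remaining = dict(clients)
    for _ in range(min(num_select, len(remaining))):
        # Prefer the client whose modalities are least represented so far.
        best = min(remaining, key=lambda c: sum(counts[m] for m in remaining[c]))
        selected.append(best)
        counts.update(remaining.pop(best))
    return selected

# Example: with two audio-only clients and one visual-only client,
# selecting two clients covers both modalities instead of audio twice.
pool = {"c1": {"audio"}, "c2": {"audio"}, "c3": {"visual"}}
print(balanced_modality_selection(pool, 2))  # ['c1', 'c3']
```

The greedy score favors under-represented modalities, which mirrors the paper's stated goal of countering modality-level bias during client selection.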
Keywords
- Artificial intelligence
- Federated learning
- Machine learning
- Multi-modal