Summary of Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data, by Rongyu Zhang et al.
Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data
by Rongyu Zhang, Yun Chen, Chenrui Wu, Fangxin Wang, Bo Li
First submitted to arXiv on 10 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper explores the challenges of federated learning (FL) under non-i.i.d. and long-tailed class distributions in mobile applications, such as autonomous vehicles. The authors propose a novel personalized FL framework called Multi-level Personalized Federated Learning (MuPFL), which consists of three modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Classifier Fine-tuning (PKCF). These modules aim to mitigate overfitting, refine local models, and personalize models to skewed local data using shared knowledge. Experimental results on real-world datasets for image classification and semantic segmentation demonstrate that MuPFL outperforms state-of-the-art baselines, improving accuracy by up to 7.39% and reducing training time by up to 80%. This framework enhances both the efficiency and effectiveness of federated learning applications. |
Low | GrooveSquid.com (original content) | Federated learning is a way for devices to learn together without sharing their data. But when the data is different from one device to another, this can cause problems. The authors of this study want to fix these issues by creating a new way to do federated learning called MuPFL. They made three important parts to help: BAVD to stop overfitting, ACMU to make local models better, and PKCF to make the models more accurate. They tested their idea on real pictures and maps and it worked really well! It was even 7.39% better than other ways of doing things. This is important because it makes federated learning faster and more useful. |
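The summaries above do not spell out how the BAVD module works, so the following is purely an illustrative sketch, not the paper's algorithm: one plausible reading of "dropping biased activation values" is to zero out the largest-magnitude activations so a client's model cannot overfit to locally dominant features. The function name and the magnitude-based selection rule are assumptions for illustration only.

```python
import numpy as np

def biased_activation_dropout(activations: np.ndarray, drop_frac: float = 0.1) -> np.ndarray:
    """Illustrative sketch (NOT the paper's BAVD): zero the largest-magnitude
    activations, one plausible way to curb overfitting to skewed local data."""
    flat = activations.ravel().astype(float)
    # Number of activation values to drop (at least one).
    k = max(1, int(drop_frac * flat.size))
    # Indices of the k activations with the largest absolute value.
    top = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.ones_like(flat)
    mask[top] = 0.0
    return (flat * mask).reshape(activations.shape)

# Example: with drop_frac=0.5 on a 2x2 map, the two largest-magnitude
# values (-3.0 and 2.5) are zeroed; the small ones survive.
acts = np.array([[0.1, 2.5], [-3.0, 0.2]])
out = biased_activation_dropout(acts, drop_frac=0.5)
```

In a real FL pipeline this masking would be applied to intermediate feature maps during local training; here a plain NumPy array stands in for a framework tensor to keep the sketch self-contained.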
Keywords
» Artificial intelligence » Dropout » Federated learning » Fine tuning » Image classification » Overfitting » Semantic segmentation