
Summary of On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness, by Shengkun Zhu et al.


On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness

by Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Xiaodong Li, Yuan Yao, Zhiyong Peng

First submitted to arXiv on: 23 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com; original content)
Personalized federated learning (PFL) aims to reduce the impact of statistical heterogeneity by developing personalized models for individual clients. However, existing PFL frameworks focus on improving the performance of personalized models while neglecting the global model. To address this limitation, the authors propose FLAME, an optimization framework that uses the alternating direction method of multipliers (ADMM) to train personalized and global models simultaneously. The framework also includes a model selection strategy that improves performance when clients hold different types of heterogeneous data. Theoretical analysis establishes global convergence and two kinds of convergence rates for FLAME under mild assumptions, and shows that FLAME is more robust and fair than state-of-the-art methods on a class of linear problems. Experiments show that FLAME outperforms existing methods in convergence rate and accuracy, achieves higher test accuracy under various attacks, and performs more uniformly across clients.
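To make the ADMM mechanics concrete, below is a minimal consensus-ADMM sketch for personalized federated learning on a toy least-squares problem. It is our own illustration of the generic ADMM template (local personalized update, global averaging, dual update), not the paper's actual FLAME algorithm; FLAME's objective, model selection strategy, and robustness guarantees go beyond this sketch, and all variable names here are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy heterogeneous clients: each holds its own least-squares problem
    # min_v 0.5 * ||A_k v - b_k||^2, with ground truths drifting per client.
    n_clients, dim, n_samples = 5, 10, 40
    clients = []
    for k in range(n_clients):
        A = rng.normal(size=(n_samples, dim))
        v_true = rng.normal(size=dim) + 0.5 * k   # statistical heterogeneity
        b = A @ v_true + 0.1 * rng.normal(size=n_samples)
        clients.append((A, b))

    rho = 1.0                                # ADMM penalty parameter
    w = np.zeros(dim)                        # global (consensus) model
    V = np.zeros((n_clients, dim))           # personalized models v_k
    Y = np.zeros((n_clients, dim))           # dual variables y_k

    for t in range(100):
        # (1) Local step: each client minimizes its own loss plus the
        #     augmented term y_k^T (v - w) + (rho/2) ||v - w||^2, which has
        #     a closed-form solution for least squares.
        for k, (A, b) in enumerate(clients):
            lhs = A.T @ A + rho * np.eye(dim)
            rhs = A.T @ b + rho * w - Y[k]
            V[k] = np.linalg.solve(lhs, rhs)
        # (2) Global step: the server averages the dual-corrected local models.
        w = (V + Y / rho).mean(axis=0)
        # (3) Dual step: each client updates its multiplier.
        Y += rho * (V - w)

    print("consensus residuals:", np.linalg.norm(V - w, axis=1).round(3))

Alternating these three steps lets every client keep a personalized model (V[k]) while the dual variables pull all models toward the shared global model (w), which is the basic intuition behind training both kinds of models at once.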
Low Difficulty Summary (written by GrooveSquid.com; original content)
Personalized federated learning (PFL) is an approach that helps reduce the impact of differences in data across groups when training models. This can make the models fairer and more robust. However, existing PFL methods focus only on improving individual models without considering the global model. The authors propose a new method called FLAME to address this issue. FLAME trains personalized and global models simultaneously using an optimization framework. It also provides a way to choose the best model depending on the type of data each group has. The authors show that their method is more robust and fair than existing methods and that it works well across different scenarios.

Keywords

» Artificial intelligence  » Federated learning  » Optimization