
Summary of FedImpro: Measuring and Improving Client Update in Federated Learning, by Zhenheng Tang et al.


FedImpro: Measuring and Improving Client Update in Federated Learning

by Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu

First submitted to arXiv on: 10 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to mitigating client drift in Federated Learning (FL), which occurs when clients have heterogeneous data distributions. The authors analyze the generalization contribution of local training and show that it is bounded by the conditional Wasserstein distance between clients' data distributions. They then introduce FedImpro, a method that decouples the model into low-level and high-level components and trains the high-level portion on reconstructed feature distributions, which improves the generalization contribution of client updates and reduces gradient dissimilarity in FL. Experimental results show that FedImpro improves the generalization performance of FL models.

Low Difficulty Summary (original content by GrooveSquid.com)
In this paper, scientists try to make a special kind of artificial intelligence called Federated Learning work better. They find that when different devices or computers are used for training, the data each one holds is not the same, which makes it harder for the AI to learn from all the data together. The authors' idea is to make the data each device trains on look more similar. They call their solution FedImpro and test it on several examples, where it works well.
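The split-model idea described in the summaries above can be sketched in a few lines. This is not the authors' implementation: the one-layer networks, the per-feature Gaussian estimate standing in for the paper's reconstructed feature distribution, and the function names (`low_level`, `high_level`, `client_update_features`) are all hypothetical, chosen only to show how a client could train its high-level component on a mix of real local features and features sampled from a shared distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_level(x, W_low):
    """Low-level feature extractor kept on each client (hypothetical 1-layer ReLU)."""
    return np.maximum(x @ W_low, 0.0)

def high_level(h, W_high):
    """High-level head; in FedImpro this part trains on reconstructed features too."""
    return h @ W_high

# Hypothetical shared estimate of the feature distribution (Gaussian per dimension),
# standing in for the paper's reconstructed feature distribution.
feat_mean, feat_std = np.zeros(8), np.ones(8)

def client_update_features(x_local, W_low, n_sampled=16):
    """Mix real local features with features sampled from the shared estimate,
    so the high-level part sees a less heterogeneous feature distribution."""
    h_real = low_level(x_local, W_low)                          # features from local data
    h_sampled = rng.normal(feat_mean, feat_std, size=(n_sampled, 8))  # shared samples
    return np.vstack([h_real, h_sampled])

x = rng.normal(size=(4, 10))        # one client's local batch
W_low = rng.normal(size=(10, 8))
h = client_update_features(x, W_low)
print(h.shape)  # (20, 8): 4 real feature vectors plus 16 sampled ones
```

The point of the mix is that every client's high-level component trains on features drawn partly from the same shared distribution, which is what reduces the dissimilarity between clients' gradients.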

Keywords

* Artificial intelligence  * Federated learning  * Generalization