

Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity

by Yuhang Chen, Wenke Huang, Mang Ye

First submitted to arXiv on: 26 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel framework for federated learning (FL) that tackles two fairness problems arising when models are trained under domain skew: parameter update conflict and model aggregation bias, both of which make current FL approaches biased. The authors observe a pronounced directional update consistency in federated learning and, leveraging it, selectively discard unimportant parameter updates so that updates from lower-performing clients are not overwhelmed by the unimportant parameters of dominant updates. They further propose a fair aggregation objective that prevents the global model from being biased toward particular domains. The method is generic and can be combined with other existing FL methods to enhance fairness. Comprehensive experiments on Digits and Office-Caltech demonstrate the high fairness and performance of the proposed method.
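The two ideas above can be illustrated with a small toy sketch. The code below is not the paper's exact algorithm: the sign-agreement mask, the threshold tau, and the accuracy-based aggregation weights are illustrative assumptions standing in for the paper's update-consistency criterion and fair aggregation objective.

```python
import numpy as np

def consistency_mask(update, history_direction, tau=0.6):
    """Keep only coordinates whose current update sign agrees with a
    sufficiently consistent historical update direction (hypothetical rule).

    update            : current parameter update (1-D array)
    history_direction : running average of past update signs, values in [-1, 1]
    tau               : consistency threshold (illustrative value)
    """
    agree = np.sign(update) == np.sign(history_direction)
    strong = np.abs(history_direction) >= tau
    # Discard "unimportant" coordinates: those without a consistent direction.
    return (agree & strong).astype(update.dtype)

def fair_aggregate(updates, client_accuracies):
    """Average masked client updates with weights that favor lower-performing
    clients (illustrative fairness heuristic, not the paper's exact objective)."""
    acc = np.asarray(client_accuracies, dtype=float)
    weights = (1.0 - acc) + 1e-8          # under-performers get more weight
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage with 3 clients and a 5-parameter model.
rng = np.random.default_rng(0)
history = [rng.uniform(-1, 1, 5) for _ in range(3)]
raw_updates = [rng.normal(size=5) for _ in range(3)]
masked = [u * consistency_mask(u, h) for u, h in zip(raw_updates, history)]
global_update = fair_aggregate(masked, client_accuracies=[0.9, 0.6, 0.7])
print(global_update)
```

In this sketch the two functions map onto the two fairness problems the summary names: the consistency mask stands in for mitigating parameter update conflict, and the weighted average stands in for a fairer aggregation step.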
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper tries to solve two big problems in federated learning that appear when different groups have different kinds of data. Right now, these methods are unfair and don't work well for everyone. The new approach makes sure the important information from each group counts equally, so every group's model can do its job well. This keeps some groups from being ignored while others get an unfair advantage. The paper also shows that the new approach works well in practice.

Keywords

* Artificial intelligence
* Federated learning