PraFFL: A Preference-Aware Scheme in Fair Federated Learning

by Rongguang Ye, Wei-Bin Kou, Ming Tang

First submitted to arXiv on: 13 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY); Distributed, Parallel, and Cluster Computing (cs.DC)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses fairness in federated learning, which aims to train models that are unbiased across groups defined by sensitive features. There is an inherent trade-off between model performance and fairness: improving fairness typically degrades performance. Existing approaches characterize this trade-off by introducing hyperparameters that encode a client's preference, but each trained model can serve only a single preference. The proposed Preference-aware scheme in Fair Federated Learning (PraFFL) instead generates a preference-specific model for each client in real time, adapting to that client's needs. The authors theoretically prove that PraFFL can produce the optimal model for an arbitrary client preference, and they establish its linear convergence. Experimental results show that PraFFL outperforms six fair federated learning algorithms in adapting to clients' diverse preferences.
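
To make the mechanism concrete, here is a minimal sketch of the general idea described above: a single model that takes a client's preference vector (a weighting of performance versus fairness) as an extra input, so any preference can be served at inference time without retraining. This is not the authors' implementation; every name in it (PreferenceConditionedNet, demographic_parity_gap, and so on) is a hypothetical illustration, and the demographic-parity gap is used only as a stand-in fairness surrogate, which may differ from the paper's exact metric.

```python
# Hypothetical sketch (NOT the authors' implementation) of preference-aware
# fair learning: one network conditioned on a 2-d preference vector
# (weight on performance, weight on fairness).
import torch
import torch.nn as nn

class PreferenceConditionedNet(nn.Module):
    """Binary classifier that takes a preference vector as extra input."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + 2, hidden),  # features + preference vector
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # Broadcast the (1, 2) preference vector to every sample in the batch.
        pref = pref.expand(x.shape[0], -1)
        return self.body(torch.cat([x, pref], dim=1)).squeeze(-1)

def demographic_parity_gap(logits, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|, relaxed with a sigmoid so it is
    differentiable. A common group-fairness surrogate, used as a stand-in."""
    p = torch.sigmoid(logits)
    return (p[sensitive == 0].mean() - p[sensitive == 1].mean()).abs()

def preference_weighted_loss(model, x, y, sensitive, pref):
    """Scalarize the performance/fairness trade-off using the preference."""
    logits = model(x, pref)
    perf = nn.functional.binary_cross_entropy_with_logits(logits, y)
    fair = demographic_parity_gap(logits, sensitive)
    return pref[0, 0] * perf + pref[0, 1] * fair

# One local training step on synthetic data. Sampling a fresh preference
# each step is what lets a single model cover the whole trade-off curve,
# rather than the single fixed preference of prior approaches.
torch.manual_seed(0)
model = PreferenceConditionedNet(in_dim=4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 4)
y = torch.randint(0, 2, (64,)).float()
s = torch.randint(0, 2, (64,))        # sensitive attribute (e.g., gender)
pref = torch.rand(1, 2)
pref = pref / pref.sum()              # keep the preference on the simplex

loss = preference_weighted_loss(model, x, y, s, pref)
opt.zero_grad()
loss.backward()
opt.step()
print(f"pref={pref.tolist()}, loss={loss.item():.4f}")
```

PraFFL's actual construction, along with its optimality and linear-convergence guarantees, is more involved than this; the sketch only illustrates the interface the summary describes: preference in, tailored model out.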

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making sure that a computer program doesn't unfairly favor one group over another. Imagine you're training a model to recognize pictures of cats and dogs, using data that comes from both male and female users: you want the model to be fair and not biased toward one gender. The problem is that making the model fairer usually makes it perform a little worse, and until now there has been no good way to let each person pick their own balance between fairness and performance. This paper proposes a new approach called PraFFL that creates a model tailored to each person's preferences, adjusting the model in real time based on what each person wants. The authors show that their approach works better than six other fair federated learning algorithms.
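
Continuing the hypothetical sketch from the medium-difficulty summary above (it reuses the model, x, y, s, and demographic_parity_gap defined there), this shows what "tailored to each person's preferences" looks like in use: the same trained model is queried with different preference vectors, tracing different points on the fairness/performance trade-off without any retraining.

```python
# Continuation of the sketch above (reuses model, x, y, s, and
# demographic_parity_gap). After training, behavior for any preference is
# available instantly, just by changing `pref`.
import torch

for w_perf in (0.9, 0.5, 0.1):
    pref = torch.tensor([[w_perf, 1.0 - w_perf]])   # (performance, fairness)
    logits = model(x, pref)
    acc = ((logits > 0).float() == y).float().mean().item()  # rough accuracy proxy
    gap = demographic_parity_gap(logits, s).item()
    print(f"pref=({w_perf:.1f}, {1 - w_perf:.1f})  acc~{acc:.2f}  dp_gap={gap:.3f}")
```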

Keywords

* Artificial intelligence
* Federated learning