Summary of GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning, by Syed Irfan Ali Meerza et al.


GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning

by Syed Irfan Ali Meerza, Luyang Liu, Jiaxin Zhang, Jian Liu

First submitted to arXiv on: 7 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
See the paper's original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) is a method for training models without sharing private data. However, FL models often exhibit bias against certain groups due to data heterogeneity and party selection. Unlike in centralized learning, mitigating bias in FL is challenging because clients' private datasets are not directly accessible. Most research focuses on global fairness (fairness measured over all clients' data combined) while overlooking local fairness (fairness on each client's own data), and existing methods require sharing sensitive information about client datasets, which is undesirable. To address these issues, we propose GLOCALFAIR, a framework that jointly improves global and local group fairness in FL without requiring sensitive statistics about client datasets. We use constrained optimization to enforce local fairness on each client and a fairness-aware, clustering-based aggregation on the server to ensure global model fairness across different groups while maintaining high utility. Our experiments show that GLOCALFAIR achieves enhanced fairness under both global and local data distributions while maintaining good utility and client fairness.
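To make the described setup a bit more concrete, below is a minimal, illustrative Python sketch of how a client-side fairness penalty and a fairness-aware, clustering-based server aggregation could fit together. This is not the authors' implementation: the scalar "fairness gap" surrogate, the penalty coefficient lam, and the KMeans-based cluster weighting are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the GLOCALFAIR implementation.
# Assumes flat NumPy weight vectors, a scalar per-client "fairness gap",
# and k-means clustering of client updates on the server.
import numpy as np
from sklearn.cluster import KMeans

def local_update(weights, grad_loss_fn, grad_fairness_fn, lam=0.5, lr=0.01, steps=10):
    """Client-side sketch: gradient descent on the utility loss plus a
    penalty on a local group-fairness surrogate (a simplified stand-in
    for the constrained optimization described in the paper)."""
    w = weights.copy()
    for _ in range(steps):
        w = w - lr * (grad_loss_fn(w) + lam * grad_fairness_fn(w))
    return w

def fairness_aware_aggregate(client_weights, client_fairness_gaps, n_clusters=3):
    """Server-side sketch: cluster the clients' model updates, then weight
    each cluster inversely to its average fairness gap so that fairer
    clusters contribute more to the global model."""
    X = np.stack(client_weights)                          # (num_clients, num_params)
    gaps = np.asarray(client_fairness_gaps, dtype=float)  # one scalar gap per client
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    centroids, scores = [], []
    for c in range(n_clusters):
        mask = labels == c
        if not mask.any():
            continue
        centroids.append(X[mask].mean(axis=0))               # mean update of cluster c
        scores.append(1.0 / (1e-6 + gaps[mask].mean()))      # fairer cluster -> larger weight
    scores = np.asarray(scores)
    scores /= scores.sum()
    return sum(s * m for s, m in zip(scores, centroids))     # new global weights
```

A real system would replace the scalar gap with explicit group-fairness constraints (e.g., metrics such as demographic parity or equalized odds) and would aggregate full model state rather than flat weight vectors; the sketch only conveys the division of labor between client-side constrained training and fairness-aware server aggregation.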
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for computers to learn together without sharing personal information. But sometimes the models they create can be unfair to certain groups of people. This problem is harder to solve than in centralized learning because we can't access the private data on each computer. Most research focuses on making sure the overall model is fair, but not on whether the model is also fair on each individual computer's data. We propose a new way to make both global and local fairness work together without sharing sensitive information about each computer's data. Our method uses special rules to ensure fairness at each computer and a special way to combine their results to ensure global fairness. Our tests show that our method can make models fairer while still being useful.

Keywords

  • Artificial intelligence
  • Clustering
  • Federated learning
  • Optimization