

Achieving Fairness Across Local and Global Models in Federated Learning

by Disha Makhija, Xing Han, Joydeep Ghosh, Yejin Kim

First submitted to arXiv on: 24 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary — GrooveSquid.com (original content)
In this paper, researchers tackle the challenge of achieving fairness in Federated Learning (FL) by introducing EquiFL, a novel approach that balances local performance and fairness. EquiFL incorporates a fairness term into the local optimization objective to prevent bias from propagating across clients during collaboration. The proposed method is tested on multiple benchmarks, demonstrating its ability to strike a balance between accuracy and fairness locally at each client while achieving global fairness. Additionally, the results show that EquiFL ensures uniform performance distribution among clients, contributing to performance fairness. This approach is also applied to a real-world distributed dataset from a healthcare application, specifically in predicting treatment effects on patients across various hospital locations.
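The summary above describes EquiFL's core idea: each client adds a fairness term to its local training objective so that bias is penalized before model updates are shared. As a rough illustration, the sketch below combines a standard classification loss with a demographic-parity gap penalty; the penalty form, the weight `lam`, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fairness_penalty(preds, groups):
    """Demographic-parity gap: absolute difference in mean predicted
    positive rate between two sensitive groups (an assumed fairness
    measure, used here only for illustration)."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def local_objective(preds, labels, groups, lam=1.0):
    """Hypothetical fairness-regularized local objective:
    binary cross-entropy loss plus a weighted fairness term."""
    eps = 1e-12  # avoid log(0)
    bce = -np.mean(labels * np.log(preds + eps)
                   + (1 - labels) * np.log(1 - preds + eps))
    return bce + lam * fairness_penalty(preds, groups)

# Toy data: six samples split across two sensitive groups
preds = np.array([0.9, 0.8, 0.7, 0.2, 0.3, 0.4])
labels = np.array([1, 1, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(local_objective(preds, labels, groups, lam=0.5))
```

In a federated loop, each client would minimize an objective of this shape on its own data before sending updates to the server, so the fairness pressure is applied locally rather than only at aggregation time.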
Low Difficulty Summary — GrooveSquid.com (original content)
This paper tries to make sure that different groups of people are treated fairly when they work together to train a machine learning model. The researchers call this idea “EquiFL,” and it makes sure that each group’s data is used equally well. They tested EquiFL on lots of different datasets and found that it does a good job of making sure everyone gets similar results. This could be important in areas like medicine, where doctors need to make decisions based on patient data.

Keywords

» Artificial intelligence  » Federated learning  » Machine learning  » Optimization