Summary of WassFFed: Wasserstein Fair Federated Learning, by Zhongxuan Han et al.
WassFFed: Wasserstein Fair Federated Learning
by Zhongxuan Han, Li Zhang, Chaochao Chen, Xiaolin Zheng, Fei Zheng, Yuyuan Li, Jianwei Yin
First submitted to arXiv on: 11 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | This paper proposes Wasserstein Fair Federated Learning (WassFFed), a framework for achieving fairness in Federated Learning (FL), where users' data cannot be shared across clients. Existing fairness research assumes access to the entire training set, so it does not transfer directly to FL. WassFFed tackles two key challenges: ensuring that fair optimization results actually translate into fair classification results, and ensuring that aggregating local models yields a globally fair model despite non-IID data distributions among clients. To do so, it computes the Wasserstein barycenter of the local models' outputs for each user group, keeping the global model consistent with the local ones (a toy illustration of this barycenter alignment appears after this table). Experiments on three real-world datasets show that WassFFed outperforms existing approaches in balancing accuracy and fairness. |
Low | GrooveSquid.com (original content) | This paper is about making Federated Learning, a kind of machine learning where different groups of people train a model without sharing their data, work fairly. Most existing fairness research assumes all the data can be pooled in one place, which isn't possible here. The new framework solves two big problems: making sure the model treats different groups of people fairly, and making sure the models trained separately by each group combine into one fair overall model. The team uses a math technique called the Wasserstein barycenter to align everything smoothly. They tested their idea on three real-world datasets and showed it balances accuracy and fairness better than previous attempts. |
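
To make the barycenter idea in the summaries concrete, here is a minimal sketch of Wasserstein barycenter alignment of group score distributions. It is not the authors' implementation: it assumes one-dimensional classifier scores, uniform barycenter weights, and toy data, and all function names (`wasserstein_barycenter_1d`, `map_to_barycenter`) are hypothetical. It relies on the fact that in 1-D, the W2 barycenter's quantile function is the average of the input quantile functions, and the optimal transport map is monotone in the ranks.

```python
import numpy as np

def wasserstein_barycenter_1d(score_groups, n_support=100):
    """W2 barycenter of 1-D empirical score distributions.

    Evaluates each group's empirical quantile function on a common grid
    and averages them, which gives the barycenter's quantile function.
    """
    qs = np.linspace(0.0, 1.0, n_support)
    quantiles = [np.quantile(scores, qs) for scores in score_groups]
    return np.mean(quantiles, axis=0)

def map_to_barycenter(scores, barycenter_quantiles):
    """Push one group's scores toward the barycenter via their ranks
    (the optimal 1-D transport map is monotone)."""
    ranks = np.argsort(np.argsort(scores)) / max(len(scores) - 1, 1)
    grid = np.linspace(0.0, 1.0, len(barycenter_quantiles))
    return np.interp(ranks, grid, barycenter_quantiles)

# Toy example: model scores for two demographic groups (hypothetical data).
rng = np.random.default_rng(0)
group_a = rng.beta(2, 5, size=500)   # group A skews toward low scores
group_b = rng.beta(5, 2, size=500)   # group B skews toward high scores
bary = wasserstein_barycenter_1d([group_a, group_b])
adj_a = map_to_barycenter(group_a, bary)
adj_b = map_to_barycenter(group_b, bary)
print(abs(adj_a.mean() - adj_b.mean()))  # near zero after alignment
```

After the mapping, both groups' score distributions coincide with the barycenter, so group-dependent disparities in the outputs shrink; the paper applies this kind of alignment to local models' outputs so that aggregation yields a globally fair model.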
Keywords
- Artificial intelligence
- Classification
- Federated learning
- Machine learning
- Optimization