Summary of The Power of Bias: Optimizing Client Selection in Federated Learning with Heterogeneous Differential Privacy, by Jiating Ma et al.
The Power of Bias: Optimizing Client Selection in Federated Learning with Heterogeneous Differential Privacy
by Jiating Ma, Yipeng Zhou, Qi Li, Quan Z. Sheng, Laizhong Cui, Jiangchuan Liu
First submitted to arXiv on: 16 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The differentially private federated learning (DPFL) paradigm strengthens privacy in federated learning by adding DP noise to obfuscate the shared gradients. A key difficulty in DPFL is that clients have heterogeneous privacy requirements, which both shapes client selection and complicates convergence analysis. To address this, the paper casts a generic client selection strategy as a convex optimization problem and solves it with the proposed DPFL-BCS algorithm. Extensive experiments on real datasets show significant gains in model utility over state-of-the-art baselines. |
Low | GrooveSquid.com (original content) | DPFL is a way to train a model across many devices without collecting their raw data. Instead of sharing data, clients share gradients (small model updates) that help train the shared model. To keep these gradients private, DP noise is added so they are harder to reverse-engineer (a minimal sketch of this step appears after the table). The catch is that different clients may need different levels of privacy protection, which makes it hard to decide which clients to use for training. This paper analyzes how well the model converges under different client selection strategies and popular DP mechanisms, and proposes an algorithm called DPFL-BCS that efficiently solves the resulting client selection problem and improves model quality. |
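
To make the idea in the summaries concrete, below is a minimal Python sketch of heterogeneous DP in federated learning: each client clips its gradient and adds Gaussian noise scaled to its own privacy budget, and the server preferentially samples clients with larger budgets (i.e., less noisy updates). The function names (`clip_and_noise`, `biased_selection`), the standard Gaussian-mechanism noise scale, and the proportional-sampling rule are illustrative assumptions only; the paper's DPFL-BCS algorithm derives its selection from a convex optimization problem rather than this heuristic.

```python
import numpy as np

def clip_and_noise(grad, clip_norm, epsilon, delta):
    """Clip a client's gradient to clip_norm and add Gaussian DP noise.

    Noise scale follows the standard Gaussian mechanism,
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon;
    the paper's exact calibration may differ.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=grad.shape)

def biased_selection(epsilons, num_selected, rng=None):
    """Toy biased client selection: sample clients with probability
    proportional to their privacy budget (larger epsilon -> less noise),
    standing in for the optimization-based selection in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(epsilons, dtype=float)
    probs = probs / probs.sum()
    return rng.choice(len(epsilons), size=num_selected, replace=False, p=probs)

# Example round: 10 clients with heterogeneous privacy budgets.
rng = np.random.default_rng(0)
epsilons = rng.uniform(0.5, 8.0, size=10)          # per-client privacy budgets
selected = biased_selection(epsilons, num_selected=4, rng=rng)
local_grads = [rng.normal(size=100) for _ in range(10)]
noised = [clip_and_noise(local_grads[i], clip_norm=1.0,
                         epsilon=epsilons[i], delta=1e-5) for i in selected]
global_update = np.mean(noised, axis=0)             # server averages noised gradients
```

The sketch only illustrates the trade-off the paper studies: biasing selection toward clients with looser privacy requirements reduces the noise in the aggregated update, at the cost of a biased sample of the data.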
Keywords
» Artificial intelligence » Federated learning » Optimization