Summary of "Using Synthetic Data to Mitigate Unfairness and Preserve Privacy in Collaborative Machine Learning," by Chia-Yuan Wu et al.
Using Synthetic Data to Mitigate Unfairness and Preserve Privacy in Collaborative Machine Learning
by Chia-Yuan Wu, Frank E. Curtis, Daniel P. Robinson
First submitted to arXiv on: 14 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computers and Society (cs.CY); Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: the paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: In this paper, the authors propose a two-stage strategy for distributed machine learning that promotes fair predictions, prevents leakage of client data, and reduces communication costs. To address unfairness, each client generates a synthetic dataset by solving a bilevel optimization problem, with differential privacy guarantees attached; only these synthetic datasets are passed to the server. This eliminates the need for fairness-specific aggregation weights while preserving client privacy. Empirical results demonstrate the advantages of this cost-effective, privacy-preserving approach (a minimal sketch of the pipeline follows the table). |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Distributed machine learning allows multiple devices to work together on a big project. This is cool! But it can also be tricky because people might not want to share their data with each other. To solve this problem, scientists came up with an idea: create fake datasets that are like the real ones, but not quite as sensitive. They did this by solving a special math problem and then adding some extra protection called differential privacy. By sharing only these fake datasets, they can make sure everyone stays private while still getting good results. |
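To make the two-stage strategy concrete, here is a minimal Python sketch of the pipeline the medium summary describes: each client builds a small noisy synthetic dataset locally, and only that dataset is sent to the server, which trains a single model on the pooled synthetic data. The synthesis step shown (averaging random subsets and adding Gaussian noise) is a crude stand-in for the paper's bilevel optimization, the noise scale is illustrative rather than calibrated to an actual differential-privacy budget, and all function names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_synthesize(X, y, m=20, noise_scale=0.5):
    # Stage 1 (client side): build a small synthetic dataset.
    # The paper poses this as a bilevel optimization problem; as a crude
    # stand-in we average random subsets of the real points (so no single
    # record is released verbatim) and add Gaussian noise, whose scale
    # would be calibrated to a differential-privacy budget in practice.
    d = X.shape[1]
    X_syn = np.empty((m, d))
    y_syn = np.empty(m)
    for i in range(m):
        idx = rng.choice(len(X), size=max(5, len(X) // m), replace=False)
        X_syn[i] = X[idx].mean(axis=0) + rng.normal(scale=noise_scale, size=d)
        y_syn[i] = float(y[idx].mean() > 0.5)
    return X_syn, y_syn

def server_train(synthetic_sets, steps=500, lr=0.1):
    # Stage 2 (server side): train one logistic-regression model on the
    # pooled synthetic data. No fairness-specific aggregation weights are
    # needed because fairness is handled during synthesis.
    X = np.vstack([Xs for Xs, _ in synthetic_sets])
    y = np.concatenate([ys for _, ys in synthetic_sets])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy run: three clients each keep their raw data local and share only a
# small noisy synthetic dataset with the server.
clients = []
for _ in range(3):
    Xc = rng.normal(size=(200, 5))
    yc = (Xc[:, 0] + 0.3 * Xc[:, 1] > 0).astype(float)
    clients.append((Xc, yc))

synthetic_sets = [client_synthesize(Xc, yc) for Xc, yc in clients]
w = server_train(synthetic_sets)
print("server model weights:", np.round(w, 3))
```

Because the server only ever sees the synthetic sets, standard training on the pooled data suffices; this is what removes the need for fairness-specific aggregation weights while keeping raw client data private, the cost and privacy advantage the summary highlights.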
Keywords
» Artificial intelligence » Machine learning » Optimization