Summary of Optimal Defenses Against Gradient Reconstruction Attacks, by Yuxiao Chen et al.
Optimal Defenses Against Gradient Reconstruction Attacks
by Yuxiao Chen, Gamze Gürsoy, Qi Lei
First submitted to arXiv on: 6 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract |
Medium | GrooveSquid.com (original content) | This paper explores ways to strengthen Federated Learning (FL) against gradient reconstruction attacks, which recover the original training data from shared gradients. The authors derive a theoretical lower bound on reconstruction error for two common defenses: adding noise to gradients and pruning them. They then tune these defenses per parameter and per model, optimizing the trade-off between data leakage and utility loss. Experiments show that their approaches outperform existing methods at balancing protection of the training data against model performance. |
Low | GrooveSquid.com (original content) | This paper is about letting people share learning models without accidentally revealing private information from their own data. Attackers might try to take this shared information and figure out what the original data looked like. The authors look for ways to make it harder for these attackers to succeed while keeping the model working well. They came up with new methods, tailored to specific models or parameters, that balance how well the data is protected against how well the model works. |
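The two baseline defenses described above, adding noise to gradients and pruning them, can be sketched as follows. This is a minimal illustration under assumed defaults, not the paper's optimized parameter- and model-specific scheme: the function names, the Gaussian noise scale `sigma`, and the pruning fraction `keep_ratio` are all illustrative choices.

```python
import numpy as np

def add_gaussian_noise(grad, sigma=0.05, rng=None):
    """Perturb a gradient with i.i.d. Gaussian noise before sharing it.

    sigma is an assumed noise scale; the paper tunes such parameters
    per model parameter to trade off leakage against utility.
    """
    rng = np.random.default_rng() if rng is None else rng
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def prune_gradient(grad, keep_ratio=0.5):
    """Zero out all but the largest-magnitude fraction of gradient entries.

    keep_ratio is an assumed pruning level; ties at the threshold may
    keep slightly more than the requested fraction.
    """
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # k-th largest absolute value becomes the keep threshold.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(grad) >= threshold, grad, 0.0)

# Apply both defenses to a toy gradient before it would be shared.
grad = np.array([[0.5, -0.02], [0.3, 0.01]])
noisy = add_gaussian_noise(grad, sigma=0.05)
pruned = prune_gradient(grad, keep_ratio=0.5)  # keeps 0.5 and 0.3, zeros the rest
```

In a federated setting, each client would apply one of these transformations to its local gradient before sending it to the server, limiting what a reconstruction attack can recover.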
Keywords
» Artificial intelligence » Federated learning » Pruning