Differentially Private Block-wise Gradient Shuffle for Deep Learning
by David Zagardo
First submitted to arXiv on: 31 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper introduces Differentially Private Block-wise Gradient Shuffle (DP-BloGS), a novel differentially private deep learning algorithm that builds on the existing private deep learning literature. Unlike traditional differentially private stochastic gradient descent (DP-SGD), which adds calibrated noise directly to clipped gradients, DP-BloGS takes a probabilistic approach, introducing noise by shuffling gradients block-wise, informed by information-theoretic privacy analyses. The authors show that combining shuffling, parameter-specific block size selection, batch layer clipping, and gradient accumulation (see the sketch after this table) lets DP-BloGS achieve training times close to those of non-private training while offering privacy and utility guarantees similar to those of DP-SGD. DP-BloGS is also found to be more resistant to data extraction attempts than DP-SGD. |
| Low | GrooveSquid.com (original content) | This paper introduces a new way of doing deep learning that keeps people’s private information safe. It’s called Differentially Private Block-wise Gradient Shuffle (DP-BloGS). The idea is to mix up the numbers the computer uses to learn, making it much harder for anyone to work out the private data they came from. The results show that this method is faster than other ways of doing private deep learning and keeps people’s information safer. |
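The block-wise shuffling idea lends itself to a short illustration. The sketch below is not the paper’s reference implementation: the helper names (`shuffle_blocks`, `accumulate_private_grads`), the per-layer clipping rule, and the block-size heuristic (roughly 1% of a layer’s parameters) are assumptions made purely for illustration.

```python
# Illustrative PyTorch sketch of block-wise gradient shuffling.
# All names and heuristics here are assumptions, not the paper's code.
import torch


def shuffle_blocks(grad: torch.Tensor, block_size: int) -> torch.Tensor:
    """Flatten a gradient, permute whole blocks of `block_size` elements, restore shape."""
    flat = grad.flatten()
    n_full = (flat.numel() // block_size) * block_size
    blocks = flat[:n_full].view(-1, block_size)
    perm = torch.randperm(blocks.size(0), device=flat.device)
    # Leftover elements that don't fill a whole block are left in place.
    shuffled = torch.cat([blocks[perm].flatten(), flat[n_full:]])
    return shuffled.view_as(grad)


def accumulate_private_grads(model, loss, accum, clip_norm=1.0):
    """Clip each layer's gradient, shuffle its blocks, and add it to an accumulator dict."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            # Per-layer clipping: bound the gradient norm (assumed clipping rule).
            scale = torch.clamp(clip_norm / (p.grad.norm() + 1e-6), max=1.0)
            clipped = p.grad * scale
            # Parameter-specific block size (assumed heuristic: ~1% of the layer).
            block_size = max(1, p.numel() // 100)
            accum[name] = accum.get(name, torch.zeros_like(p)) + shuffle_blocks(
                clipped, block_size
            )
```

In this sketch, the shuffled, clipped gradients collected in `accum` would be averaged over several micro-batches and applied with an ordinary optimizer step, which is where the gradient-accumulation component mentioned in the summary would come in.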
Keywords
- Artificial intelligence
- Deep learning
- Stochastic gradient descent