Summary of Wasserstein Differential Privacy, by Chengyi Yang et al.
Wasserstein Differential Privacy
by Chengyi Yang, Jiayin Qi, Aimin Zhou
First submitted to arXiv on: 23 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed Wasserstein differential privacy (WDP) framework is an alternative to existing differential privacy (DP) frameworks; it measures the risk of privacy leakage while satisfying symmetry and the triangle inequality. WDP has 13 excellent properties, which provide theoretical support for its better performance compared to other DP frameworks. The framework is applied to stochastic gradient descent (SGD) scenarios with sub-sampling, using a general privacy-accounting method called the Wasserstein accountant. Experimental results show that the privacy budgets obtained by the Wasserstein accountant are relatively stable and less influenced by order, alleviating the overestimation of privacy budgets. |
Low | GrooveSquid.com (original content) | Wasserstein differential privacy is a new way to measure how much privacy is lost when sharing data. It helps prevent big tech companies from getting too much information about us. This approach has 13 good properties that make it better than other methods. It's used in machine learning, which is like training computers to learn and get smarter. The results show that this method works well and keeps privacy budgets stable. |
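The core idea behind the summaries above, measuring privacy loss as a Wasserstein distance between a mechanism's output distributions on neighboring datasets, can be sketched in a few lines of Python. This is an illustrative toy, not the paper's construction: the `noisy_mean` mechanism, the datasets, and all parameters below are assumptions chosen for demonstration, and the distance is estimated empirically in one dimension.

```python
import random

def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size empirical samples:
    sort both samples and average the absolute differences of the
    matched order statistics (exact for 1-D empirical distributions)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def noisy_mean(data, sigma, rng):
    """A toy Gaussian mechanism: release the dataset mean plus noise."""
    return sum(data) / len(data) + rng.gauss(0.0, sigma)

rng = random.Random(0)
d = [0.0] * 100                  # dataset D
d_prime = [0.0] * 99 + [1.0]     # neighboring dataset D' (one record changed)

# Empirical output distributions of the mechanism on D and D'
out_d = [noisy_mean(d, 0.1, rng) for _ in range(2000)]
out_dp = [noisy_mean(d_prime, 0.1, rng) for _ in range(2000)]

# A small distance means the two output distributions are hard to
# distinguish, i.e. little privacy is leaked by the changed record.
loss = wasserstein_1d(out_d, out_dp)
```

Unlike the max-divergence used in classical DP, the Wasserstein distance here is a true metric, so it is symmetric (`wasserstein_1d(a, b) == wasserstein_1d(b, a)`) and obeys the triangle inequality, which is exactly the property the medium summary highlights.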
Keywords
* Artificial intelligence * Machine learning * Stochastic gradient descent