

DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction

by Xinwei Zhang, Zhiqi Bu, Borja Balle, Mingyi Hong, Meisam Razaviyayn, Vahab Mirrokni

First submitted to arXiv on: 4 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces DiSK, a framework that improves the performance of differentially private (DP) optimizers for training machine learning models. The authors address a key limitation of existing DP optimizers: a significant performance drop in large-scale training, caused by the excessive noise that must be injected to maintain DP. DiSK employs a simplified Kalman filter to denoise privatized gradients and generate progressively refined gradient estimates. This approach provably improves the iteration-complexity upper bound while remaining practical for large-scale training. The authors demonstrate consistent improvements over standard DP optimizers such as DPSGD across diverse tasks, including vision benchmarks (CIFAR-100, ImageNet-1k) and language fine-tuning benchmarks (GLUE, E2E, and DART).
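The summary describes the mechanism only at a high level, so the Python sketch below is an illustration of the general idea, not the paper's exact algorithm: it pairs a standard DP-SGD privatization step (per-gradient clipping plus Gaussian noise) with a simple recursive filter that blends the previous gradient estimate with each freshly privatized gradient. The function names (privatize, disk_style_sgd), the filter gain kappa, and the toy quadratic objective are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(grad, clip_norm=1.0, noise_mult=1.0):
    """Standard DP privatization step: clip the gradient to a norm
    bound, then add Gaussian noise scaled to that bound."""
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise

def disk_style_sgd(grad_fn, x0, lr=0.1, kappa=0.3, steps=100):
    """DP-SGD with a simplified Kalman-filter-style denoiser
    (illustrative sketch): the descent direction is a running
    filtered estimate that blends the previous estimate with each
    new privatized gradient, progressively averaging out the noise."""
    x = x0.copy()
    d = np.zeros_like(x0)  # filtered gradient estimate
    for _ in range(steps):
        g_priv = privatize(grad_fn(x))          # noisy, privatized gradient
        d = (1.0 - kappa) * d + kappa * g_priv  # filter update (denoising)
        x = x - lr * d                          # descend along the estimate
    return x

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
x_final = disk_style_sgd(lambda x: x, x0=np.ones(5))
print(x_final)
```

Because the filter averages privatized gradients across iterations, the effective noise in the descent direction shrinks over time, which is the intuition behind the improved iteration complexity claimed in the summary.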
Low Difficulty Summary (written by GrooveSquid.com, original content)
Differentially private machine learning models are designed to protect individual data privacy. The problem is that current methods can be slow or inaccurate when training large models. Researchers have developed a new way to improve these models using a technique called Kalman filtering. It reduces the noise in the model's calculations, making training work better and faster. The team tested the new method on many different types of problems, such as recognizing images and understanding language, and found that it worked much better than previous methods, even when training very large models.

Keywords

» Artificial intelligence  » Fine tuning  » Machine learning