
Summary of Enhancing DP-SGD through Non-monotonous Adaptive Scaling Gradient Weight, by Tao Huang et al.


Enhancing DP-SGD through Non-monotonous Adaptive Scaling Gradient Weight

by Tao Huang, Qingyu Huang, Xin Shi, Jiayang Meng, Guolong Zheng, Xu Yang, Xun Yi

First submitted to arXiv on: 5 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces a novel approach to differential privacy in deep learning that preserves model utility while protecting sensitive data. Its enhanced Differentially Private Per-sample Adaptive Scaling Clipping (DP-PSASC) method replaces traditional gradient clipping with adaptive scaling, removing the need to tune a clipping threshold and improving learning under differential privacy. The contribution has two parts: a novel gradient scaling technique that assigns proper weight to gradients, particularly small ones, and a momentum-based method integrated into DP-PSASC to reduce the bias introduced by stochastic sampling and improve convergence rates. Theoretical and empirical analyses confirm that DP-PSASC preserves privacy and delivers superior performance across diverse datasets. A short code sketch of the adaptive-scaling step follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper presents a new way to keep personal data safe while still getting good results from deep learning models. It does this by changing how gradients are handled in an algorithm called Differentially Private Per-sample Adaptive Scaling Clipping (DP-PSASC), which makes it possible to strike a better balance between protecting privacy and performing well on a task. The approach has two parts: one assigns good weights to all gradients, even small ones, and the other uses momentum to help the algorithm learn faster.
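
To make the adaptive-scaling idea above concrete, here is a minimal PyTorch sketch of a single private optimizer step in the spirit of DP-PSASC. The specific weight function 1/(norm + r), the noise calibration, and the heavy-ball momentum form are illustrative assumptions, not the paper's exact formulas.

```python
import torch

def dp_psasc_step(params, per_sample_grads, momentum_buf,
                  lr=0.1, beta=0.9, r=0.01, noise_multiplier=1.0):
    """Illustrative single step of DP-SGD with per-sample adaptive scaling.

    per_sample_grads: list with one entry per example; each entry is a list
    of gradient tensors, one per parameter in `params`.
    The weight 1/(norm + r), the noise calibration, and the heavy-ball
    momentum below are assumptions for illustration only.
    """
    batch_size = len(per_sample_grads)
    scaled = []
    for grads in per_sample_grads:
        norm = torch.cat([g.reshape(-1) for g in grads]).norm()
        # Adaptive scaling instead of a hard clipping threshold: the scaled
        # per-sample norm stays below 1 (bounded sensitivity), while small
        # gradients keep a comparatively large weight.
        weight = 1.0 / (norm + r)
        scaled.append([g * weight for g in grads])
    # Average the scaled gradients and add Gaussian noise calibrated to the
    # unit sensitivity bound (standard Gaussian-mechanism form, assumed here).
    noisy = [torch.stack(gs).mean(dim=0)
             + torch.randn_like(gs[0]) * noise_multiplier / batch_size
             for gs in zip(*scaled)]
    # Momentum (assumed heavy-ball form) to reduce the bias introduced by
    # stochastic sampling and speed up convergence.
    momentum_buf = [beta * m + g for m, g in zip(momentum_buf, noisy)]
    with torch.no_grad():
        for p, m in zip(params, momentum_buf):
            p.add_(m, alpha=-lr)
    return momentum_buf
```

In use, `momentum_buf` would start as zero tensors shaped like each parameter, and `per_sample_grads` would come from a per-example gradient pass (e.g., microbatching or torch.func-style per-sample gradients); both of those choices are assumptions of this sketch.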

Keywords

  • Artificial intelligence
  • Deep learning