


Learning with User-Level Local Differential Privacy

by Puning Zhao, Li Shen, Rongfei Fan, Qingming Li, Huiwen Wu, Jiafei Wu, Zhe Liu

First submitted to arXiv on: 27 May 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
This paper investigates user-level privacy in distributed systems, focusing on the local model, which has received less attention than the central model. The researchers analyze mean estimation and apply it to tasks such as stochastic optimization, classification, and regression. They propose adaptive strategies that perform well at all privacy levels and establish information-theoretic lower bounds showing their methods are minimax optimal up to logarithmic factors. Unlike the central DP model, where user-level DP leads to slower convergence, the local model achieves similar convergence rates in the user-level and item-level cases for bounded-support distributions. For heavy-tailed distributions, user-level privacy even outperforms item-level privacy.

Low Difficulty Summary (GrooveSquid.com original content)
Distributed systems need strong user-level privacy. Most research focuses on central models, but local models matter too. This paper looks at how to keep users private in a distributed system using the local model. The authors solve a mean estimation problem and apply it to tasks like optimization, classification, and regression. They also find ways to make their methods work well at all privacy levels and show that the methods are nearly the best possible. In some cases, user-level privacy is even better than item-level privacy.
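To make the mean-estimation setting concrete, here is a minimal, textbook-style sketch of user-level locally private mean estimation: each user averages their own items, clips the result to a known range, and adds Laplace noise before reporting, so the server never sees raw data. This is an illustrative assumption on our part, not the paper's adaptive algorithm; the function names and parameters (`user_report`, `estimate_mean`, the `[lo, hi]` range) are hypothetical.

```python
import random

def laplace_noise(scale, rng=random):
    # The difference of two iid exponentials with mean `scale`
    # is a Laplace(0, scale) sample.
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def user_report(samples, epsilon, lo=0.0, hi=1.0, rng=random):
    """One user's epsilon-LDP report: clip the local mean, add Laplace noise.

    Releasing only the clipped per-user mean keeps the sensitivity at
    (hi - lo) no matter how many items the user holds -- the basic
    intuition behind user-level (rather than item-level) local privacy.
    """
    m = sum(samples) / len(samples)          # local mean over the user's items
    m = min(max(m, lo), hi)                  # clip to the assumed support
    return m + laplace_noise((hi - lo) / epsilon, rng)

def estimate_mean(all_user_samples, epsilon):
    """Server side: average the privatized per-user reports."""
    reports = [user_report(s, epsilon) for s in all_user_samples]
    return sum(reports) / len(reports)

# Usage: 2000 users, 50 items each, drawn uniformly from [0, 1].
random.seed(0)
users = [[random.random() for _ in range(50)] for _ in range(2000)]
est = estimate_mean(users, epsilon=1.0)
# est concentrates around the true mean 0.5 as the number of users grows
```

With many users, the Laplace noise averages out, so the estimate concentrates around the true mean; the paper's contribution is analyzing (and optimizing) exactly how fast this happens across privacy levels and distribution classes.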

Keywords

» Artificial intelligence  » Attention  » Classification  » Optimization  » Regression