

Auditing Differential Privacy Guarantees Using Density Estimation

by Antti Koskela, Jafar Mohammadi

First submitted to arXiv on: 7 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents a novel method for auditing the differential privacy (DP) guarantees of machine learning (ML) models, particularly those trained with DP-SGD. The proposed solution is agnostic to the randomization used in the underlying mechanism and requires no prior knowledge of the parametric form of the noise or of the subsampling ratio. Building on previous work, the approach uses a histogram-based density estimation technique to find lower bounds on the statistical distance between two one-dimensional distributions corresponding to outputs on neighboring datasets. The method generalizes threshold-based membership inference auditing and improves upon accurate auditing techniques such as f-DP auditing. It also addresses an open problem: accurately auditing the subsampled Gaussian mechanism without prior knowledge of its parameters.
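To make the idea above concrete, below is a minimal illustrative sketch, not the authors' implementation, of how a histogram-based plug-in estimate over two sets of one-dimensional mechanism outputs could be used to lower-bound the privacy parameter ε. It phrases (ε, δ)-distinguishability via the hockey-stick divergence, a standard way to express (ε, δ)-DP; the bin count, search range, and all function names here are assumptions made for illustration, and a rigorous audit would additionally account for estimation error (e.g. via confidence intervals around the histogram counts) so that the reported ε is a statistically valid lower bound.

```python
import numpy as np

def hockey_stick_divergence(p_probs, q_probs, eps):
    """Plug-in estimate of H_{e^eps}(P || Q) = sum_i max(p_i - e^eps * q_i, 0)."""
    return np.maximum(p_probs - np.exp(eps) * q_probs, 0.0).sum()

def audit_epsilon_lower_bound(samples_p, samples_q, delta, n_bins=100):
    """Largest eps whose plug-in hockey-stick divergence still exceeds delta.

    samples_p / samples_q: 1-D arrays of mechanism outputs (e.g. losses or
    scores) observed on two neighboring datasets. This plain plug-in estimate
    ignores sampling error; a rigorous audit would correct for it.
    """
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p_probs = np.histogram(samples_p, bins=bins)[0] / len(samples_p)
    q_probs = np.histogram(samples_q, bins=bins)[0] / len(samples_q)

    # If even eps = 0 gives divergence below delta, report 0 (no evidence of leakage).
    if hockey_stick_divergence(p_probs, q_probs, 0.0) < delta:
        return 0.0

    # Binary search for the largest eps with H_{e^eps}(P || Q) >= delta.
    eps_lo, eps_hi = 0.0, 20.0
    for _ in range(60):
        eps_mid = 0.5 * (eps_lo + eps_hi)
        if hockey_stick_divergence(p_probs, q_probs, eps_mid) >= delta:
            eps_lo = eps_mid
        else:
            eps_hi = eps_mid
    return eps_lo

# Toy usage: outputs of a hypothetical Gaussian mechanism on neighboring datasets.
rng = np.random.default_rng(0)
samples_p = rng.normal(0.0, 1.0, size=200_000)   # outputs on dataset D
samples_q = rng.normal(1.0, 1.0, size=200_000)   # outputs on neighbor D'
print(audit_epsilon_lower_bound(samples_p, samples_q, delta=1e-5))
```

Note that nothing in this sketch needs to know the mechanism's noise distribution or subsampling ratio; it works purely from the two sets of observed outputs, which is the property the paper highlights.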
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about finding a new way to check whether a machine learning model is private, meaning it protects people’s personal information. Currently, checking this requires knowing some details about how the model was trained. The researchers developed a method that doesn’t need this prior knowledge and can work with different types of models. They use a simple counting technique to estimate the difference between two sets of data. This method is better than previous ones at auditing machine learning models for privacy. It also helps solve an open problem in checking the privacy of certain types of Gaussian mechanisms.

Keywords

  • Artificial intelligence
  • Density estimation
  • Inference
  • Machine learning