Better Membership Inference Privacy Measurement through Discrepancy

by Ruihan Wu, Pengrun Huang, Kamalika Chaudhuri

First submitted to arXiv on: 24 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Membership Inference Attacks (MIAs) have become a dominant method for empirically measuring privacy leakage from machine learning models. The goal is to quantify the advantage, or gap, between scores computed on training and test data. A major challenge in practical deployment is that MIAs do not scale well to large, well-generalized models: either the advantage is low, or the attack involves training multiple models, which is computationally expensive. This work proposes a new empirical privacy metric inspired by discrepancy theory, which provides an upper bound on the advantage of a family of MIAs. The metric can be applied to large ImageNet classification models in-the-wild and achieves higher advantage than existing metrics on models trained with recent, sophisticated training recipes.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you have a secret recipe book that only certain people know how to make. You want to keep it private, so you don’t let others use your recipe ideas without permission. This paper is about finding ways to measure how well someone can guess if they’re using your recipe or not. The problem is that these methods are hard to use with very large and complicated recipe books. In this work, the authors create a new way to measure privacy that doesn’t require creating many different recipe books. They show that their method works better than others for really big and complex recipe books.
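To make the "advantage" notion in the medium summary concrete, here is a minimal, hypothetical sketch of a score-threshold membership inference attack. It is not the paper's discrepancy-based metric; the scores are synthetic stand-ins for per-example losses, where training-set members tend to score lower than held-out examples, and the advantage is the gap between the attack's true-positive and false-positive rates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-example loss scores (hypothetical): members of the
# training set tend to have lower loss than held-out (test) examples.
member_scores = rng.normal(loc=0.5, scale=0.3, size=1000)
nonmember_scores = rng.normal(loc=1.0, scale=0.3, size=1000)

def mia_advantage(members, nonmembers, threshold):
    """Advantage of a threshold attack that predicts 'member' when
    score < threshold: true-positive rate minus false-positive rate."""
    tpr = np.mean(members < threshold)
    fpr = np.mean(nonmembers < threshold)
    return tpr - fpr

# Sweep thresholds and report the best empirical advantage.
thresholds = np.linspace(0.0, 1.5, 151)
best = max(mia_advantage(member_scores, nonmember_scores, t) for t in thresholds)
print(f"best empirical advantage: {best:.3f}")
```

On a large, well-generalized model the two score distributions overlap heavily, so this advantage shrinks toward zero; that is the scaling problem the paper's metric is designed to address by upper-bounding the advantage of a whole family of such attacks.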

Keywords

  • Artificial intelligence
  • Classification
  • Inference
  • Machine learning