Summary of "Do Parameters Reveal More than Loss for Membership Inference?" by Anshuman Suri et al.
Do Parameters Reveal More than Loss for Membership Inference?
by Anshuman Suri, Xiao Zhang, David Evans
First submitted to arXiv on: 17 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | In this paper, the researchers investigate how well membership inference attacks can detect whether individual records were used to train a machine learning model. They challenge previous claims that black-box access is sufficient for optimal membership inference, showing instead that white-box access to model parameters is necessary. The authors propose a new attack, IHA (Inverse Hessian Attack), which uses model parameters to improve the accuracy of membership inference (see the illustrative sketch after this table). This work has implications both for auditors assessing privacy leakage and for adversaries.
Low | GrooveSquid.com (original content) | Membership inference attacks try to figure out whether someone's personal information was used to train a machine learning model. These attacks can help show how much a model leaks about its training data, but they are often too expensive to run or make unrealistic assumptions about what an attacker can access. The researchers show that previous claims about achieving the best possible attacks without looking inside the model do not hold: direct access to the model's inner workings is needed. They also propose a new attack, IHA (Inverse Hessian Attack), which uses information about the model's parameters. This matters both for people trying to keep their data safe and for those trying to attack these models.
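To make the difference between a loss-only (black-box) attack and a parameter-based (white-box) one more concrete, here is a minimal toy sketch. It is not the paper's actual IHA formulation: it simply scores a record by combining its loss with an inverse-Hessian-weighted gradient term (an influence-function-style statistic) for a small logistic regression model. The synthetic data, model, and all names below are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's exact IHA method): a white-box
# membership score that combines a record's loss with an inverse-Hessian-
# weighted gradient term, computed from the model's parameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "members" (used for training) and "non-members" (held out).
d = 5
n_train, n_out = 200, 200
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = (X_train @ w_true + 0.5 * rng.normal(size=n_train) > 0).astype(float)
X_out = rng.normal(size=(n_out, d))
y_out = (X_out @ w_true + 0.5 * rng.normal(size=n_out) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train an L2-regularized logistic regression with plain gradient descent.
lam = 1e-2
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X_train @ w)
    grad = X_train.T @ (p - y_train) / n_train + lam * w
    w -= 0.5 * grad

# White-box quantities: Hessian of the regularized training loss at w.
p = sigmoid(X_train @ w)
S = p * (1 - p)
H = (X_train * S[:, None]).T @ X_train / n_train + lam * np.eye(d)
H_inv = np.linalg.inv(H)

def membership_score(x, y):
    """Higher score => more likely a training member (under this toy statistic)."""
    p = sigmoid(x @ w)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    g = (p - y) * x                  # per-example gradient of the loss w.r.t. w
    self_influence = g @ H_inv @ g   # inverse-Hessian-weighted gradient term
    # Training members tend to have low loss and small gradients at the fitted
    # parameters, so negate to make larger scores indicate membership.
    return -(loss + self_influence)

member_scores = np.array([membership_score(x, y) for x, y in zip(X_train, y_train)])
outside_scores = np.array([membership_score(x, y) for x, y in zip(X_out, y_out)])
print("mean score (members):    ", member_scores.mean())
print("mean score (non-members):", outside_scores.mean())
```

A loss-only attack would use just the `loss` term; the extra inverse-Hessian term is the kind of parameter-dependent signal that requires white-box access, which is the distinction the paper studies.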
Keywords
- Artificial intelligence
- Inference
- Machine learning