Summary of Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks, by Sayedeh Leila Noorbakhsh et al.
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
by Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
First submitted to arXiv on: 4 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | Machine learning is susceptible to inference attacks that aim to reveal private information about the training data. Existing defenses are typically tailored to a single type of attack, sacrifice utility, or can be circumvented by adaptive attacks. This paper proposes Inf2Guard, an information-theoretic defense framework against the three primary types of inference attacks: membership inference, property inference, and data reconstruction. Inspired by representation learning, Inf2Guard learns shared representations through two mutual-information objectives, one for privacy protection and one for utility preservation (sketched below the table). The framework has several merits: it allows customized objectives against specific attacks, it is general enough to treat many existing defenses as special cases, and it yields theoretical results such as the inherent utility-privacy tradeoff and guaranteed privacy leakage. Extensive experiments validate that Inf2Guard learns privacy-preserving representations against inference attacks and outperforms baseline defenses. |
| Low | GrooveSquid.com (original content) | Machine learning models are vulnerable to attacks that try to uncover private information about their training data. Current defenses usually stop only one type of attack, often make the model less useful, or can be broken by more advanced attacks. This paper introduces a new defense called Inf2Guard, based on learning shared representations that can benefit many different tasks. The framework balances two goals: keep private information hidden and keep the representations useful. Inf2Guard has several good qualities: custom defenses can be designed against specific attacks, many existing defenses fit into it as special cases, and it comes with theoretical results such as the tradeoff between keeping things private and keeping them useful. |
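The two mutual-information objectives described in the medium-difficulty summary admit a compact schematic form. The following is a minimal sketch with assumed notation (x the input, z = f_θ(x) the learned representation, u the private attribute targeted by the inference attack, y the downstream task label, λ a tradeoff weight); it is an illustration, not necessarily the paper's exact formulation:

```latex
% Minimal sketch of the two mutual-information objectives
% (notation assumed; not necessarily the paper's exact formulation).
%   z = f_\theta(x) : learned representation of the input x
%   u               : private attribute an inference attacker targets
%   y               : label of the downstream (utility) task
%   \lambda         : utility-privacy tradeoff weight
\min_{\theta} \; I(z; u) \;-\; \lambda\, I(z; y),
\qquad z = f_{\theta}(x)
```

Minimizing I(z; u) limits what an attacker can infer about the private attribute from the shared representation, while the −λ·I(z; y) term rewards keeping task-relevant information; when u and y are correlated, the two terms pull in opposite directions, which is one way to see the inherent utility-privacy tradeoff. In practice, mutual-information terms like these are intractable and are commonly approximated with variational bounds parameterized by neural networks.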
Keywords
* Artificial intelligence
* Inference
* Machine learning
* Representation learning