Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
by Songjie Xie, Youlong Wu, Jiaxuan Li, Ming Ding, Khaled B. Letaief
First submitted to arXiv on: 16 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Information Theory (cs.IT)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes a theoretical framework for examining the interplay between privacy and fairness in machine learning (ML). As ML becomes more prevalent in human-centric applications, algorithmic fairness and privacy protection have each drawn growing attention. The authors develop an information bottleneck (IB)-based information obfuscation method with local differential privacy (LDP) for fair representation learning, and show that incorporating LDP randomizers during the encoding process can enhance the fairness of the learned representation. The method also offers practical advantages: it is trained non-adversarially and requires no variational prior. (A rough code sketch of this idea follows the table.) |
| Low | GrooveSquid.com (original content) | This paper is about making machine learning algorithms both fair and private. It’s like trying to keep secrets while still sharing useful information. The authors came up with a new way to do this by combining two ideas: hiding sensitive information (privacy) and making sure everyone is treated fairly (fairness). They showed that their method can work well in practice, which matters for building trust in AI systems. |
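To make the medium-level description concrete, here is a minimal, hypothetical sketch of the general idea: an encoder whose output passes through an LDP randomizer before a downstream task head, trained with an ordinary supervised loss and no adversary. This is not the authors' actual architecture or randomizer; the Gaussian mechanism, the network sizes, and names like `LDPGaussianRandomizer` and `FairLDPEncoder` are all invented for illustration.

```python
# Hypothetical sketch (not the paper's exact method): a representation is
# clipped and perturbed by a Gaussian-mechanism LDP randomizer before release,
# with the noise injection loosely playing the role of the IB compression term.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDPGaussianRandomizer(nn.Module):
    """Clips each representation to L2 norm <= clip, then adds Gaussian noise
    calibrated for (epsilon, delta)-LDP via the standard Gaussian mechanism.
    The L2 sensitivity of a norm-clipped vector is 2 * clip."""
    def __init__(self, epsilon: float, delta: float, clip: float = 1.0):
        super().__init__()
        sensitivity = 2.0 * clip
        self.clip = clip
        self.sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        norms = z.norm(dim=-1, keepdim=True).clamp(min=1e-12)
        z = z * (self.clip / norms).clamp(max=1.0)   # per-sample L2 clipping
        return z + self.sigma * torch.randn_like(z)  # noise added locally, per sample

class FairLDPEncoder(nn.Module):
    def __init__(self, in_dim, rep_dim, num_classes, epsilon=1.0, delta=1e-5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, rep_dim))
        self.randomizer = LDPGaussianRandomizer(epsilon, delta)
        self.head = nn.Linear(rep_dim, num_classes)

    def forward(self, x):
        z = self.randomizer(self.encoder(x))  # representation released under LDP
        return self.head(z), z

# Usage sketch: a plain cross-entropy objective, with no adversarial game
# and no variational prior in the training loop.
model = FairLDPEncoder(in_dim=32, rep_dim=16, num_classes=2)
x, y = torch.randn(64, 32), torch.randint(0, 2, (64,))
logits, _ = model(x)
loss = F.cross_entropy(logits, y)
loss.backward()
```

Because the randomizer bounds how much any single input can influence the released representation, training needs neither an adversary nor a variational prior, which loosely mirrors the practical advantages the summary highlights.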
Keywords
* Artificial intelligence
* Machine learning
* Representation learning