Summary of Contrastive Explainable Clustering with Differential Privacy, by Dung Nguyen et al.
Contrastive explainable clustering with differential privacy
by Dung Nguyen, Ariel Vetzler, Sarit Kraus, Anil Vullikanti
First submitted to arXiv on: 7 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents a novel approach in Explainable AI (XAI) that combines contrastive explanations with differential privacy for clustering. The authors develop efficient differentially private contrastive explanations for basic clustering problems, including k-median and k-means, achieving essentially the same explanations as non-private clustering methods. A contrastive explanation measures the utility gap between the original clustering and a clustering in which certain centroids are fixed, computed under differential privacy (a toy sketch of this idea follows the table). Extensive experiments across various datasets demonstrate that the approach provides meaningful explanations without compromising data privacy or clustering utility. This work contributes to privacy-aware machine learning by showing that privacy and utility can be balanced when explaining clustering tasks. |
| Low | GrooveSquid.com (original content) | This paper helps us understand how to explain why a computer program groups things together (like people or objects) while making sure the explanation doesn't reveal personal information about the individuals being grouped. To do this, the authors combine two ideas: contrastive explanations and differential privacy. Contrastive explanations measure how different it is to group things one way versus another; differential privacy ensures that even if the grouping method is shared with someone else, they cannot figure out which individual information belongs to each person or object. The authors test this approach on many datasets and show that it works well without compromising data privacy or the quality of the groupings. |
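To make the utility-gap idea from the medium summary concrete, here is a minimal, hypothetical Python sketch: it compares the k-means cost of the original centroids with the cost when some centroids are fixed elsewhere, and releases that gap with Laplace noise. This illustrates the general concept only, not the authors' actual algorithm; the function names, the noise calibration, and the `sensitivity` parameter are all assumptions.

```python
# Illustrative sketch (not the paper's algorithm): a differentially private
# "contrastive explanation" released as a noisy k-means utility gap.
import numpy as np

def kmeans_cost(points, centroids):
    """Sum of squared distances from each point to its nearest centroid."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.min(axis=1) ** 2).sum())

def private_contrastive_explanation(points, original_centroids,
                                    fixed_centroids, epsilon, sensitivity):
    """Release the cost gap between the original clustering and one with
    fixed (alternative) centroids, perturbed by Laplace noise calibrated
    to an assumed sensitivity of the gap."""
    gap = (kmeans_cost(points, fixed_centroids)
           - kmeans_cost(points, original_centroids))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return gap + noise

# Toy usage: two well-separated blobs; one alternative centroid is "fixed"
# away from its blob, so the gap should be clearly positive.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
original = np.array([[0.0, 0.0], [8.0, 8.0]])
fixed = np.array([[0.0, 0.0], [4.0, 4.0]])  # assumed alternative placement
print(private_contrastive_explanation(points, original, fixed,
                                      epsilon=1.0, sensitivity=1.0))
```

In a real deployment the sensitivity would have to be derived from the data domain (e.g., a bound on point norms), since it determines how much noise is needed for a given privacy budget epsilon.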
Keywords
* Artificial intelligence
* Clustering
* k-means
* Machine learning