Summary of From Discrete to Continuous: Deep Fair Clustering with Transferable Representations, by Xiang Zhang
From Discrete to Continuous: Deep Fair Clustering With Transferable Representations
by Xiang Zhang
First submitted to arXiv on: 24 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: Deep learning-based fair clustering is a crucial data-partitioning task that hides sensitive attributes while grouping data into clusters. Existing methods optimize fairness-related objective functions built on group fairness criteria, but they are limited to discrete sensitive variables and neglect the potential of the learned representations to improve performance on other tasks. To address these limitations, we propose a flexible deep fair clustering method that handles discrete and continuous sensitive attributes simultaneously. Our approach is based on an information bottleneck style objective function that learns fair and clustering-friendly representations. Furthermore, we explore the transferability of the extracted representations to other downstream tasks, ensuring fairness at the representation level regardless of the clustering results. (An illustrative sketch of this kind of objective follows the table.) |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: Fair clustering with deep learning hides sensitive data attributes while grouping data into clusters. Existing methods only work for discrete sensitive variables and don't consider how the learned representations can improve performance on other tasks. To fix this, we created a new way to do fair clustering that works for both continuous and discrete sensitive attributes. We used an information bottleneck style objective function to learn fair and clustering-friendly representations. We also looked at how these learned representations can be transferred to other tasks, making sure fairness is maintained. |
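To make the recipe above concrete, here is a minimal PyTorch sketch of a fair-clustering-style loss: a DEC-style soft-assignment clustering term plus a simple cross-covariance penalty that discourages the representation from depending on a sensitive attribute, whether it is one-hot (discrete) or real-valued (continuous). This is only an illustration of the general idea; the function names, the clustering term, and the dependence penalty are assumptions made for the sketch, not the paper's actual information bottleneck style objective.

```python
# Illustrative sketch only: NOT the paper's objective. Combines a DEC-style
# clustering term with a simple cross-covariance fairness penalty that works
# for discrete (one-hot) or continuous sensitive attributes.
import torch
import torch.nn.functional as F


def soft_cluster_assignments(z, centroids, alpha=1.0):
    """Student's t soft assignment of each representation z to the centroids."""
    dist_sq = torch.cdist(z, centroids).pow(2)
    q = (1.0 + dist_sq / alpha).pow(-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)


def dependence_penalty(z, s):
    """Squared cross-covariance between representations z and a sensitive
    attribute s (one-hot for discrete, real-valued for continuous)."""
    z_c = z - z.mean(dim=0, keepdim=True)
    s_c = s - s.mean(dim=0, keepdim=True)
    cov = z_c.t() @ s_c / (z.size(0) - 1)
    return cov.pow(2).sum()


def fair_clustering_loss(z, centroids, s, lambda_fair=1.0):
    """Clustering term (KL to a sharpened target, as in DEC-style self-training)
    plus a penalty on dependence between z and the sensitive attribute."""
    q = soft_cluster_assignments(z, centroids)
    p = q.pow(2) / q.sum(dim=0, keepdim=True)
    p = (p / p.sum(dim=1, keepdim=True)).detach()  # fixed target distribution
    cluster_term = F.kl_div(q.log(), p, reduction="batchmean")
    return cluster_term + lambda_fair * dependence_penalty(z, s)


# Toy usage: 64 samples, 10-dim representations, 3 clusters, and a 1-dim
# continuous sensitive attribute (e.g., age rescaled to [0, 1]).
z = torch.randn(64, 10, requires_grad=True)
centroids = torch.randn(3, 10, requires_grad=True)
s = torch.rand(64, 1)
loss = fair_clustering_loss(z, centroids, s, lambda_fair=0.5)
loss.backward()
```

Because the penalty acts on the representation itself rather than on the cluster assignments, fairness is encouraged at the representation level, which is what allows the learned representation to be reused for other downstream tasks.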
Keywords
- Artificial intelligence
- Clustering
- Deep learning
- Objective function
- Transferability