Summary of Learning Decomposable and Debiased Representations via Attribute-Centric Information Bottlenecks, by Jinyung Hong et al.
Learning Decomposable and Debiased Representations via Attribute-Centric Information Bottlenecks
by Jinyung Hong, Eun Som Jeon, Changhoon Kim, Keun Hee Park, Utkarsh Nath, Yezhou Yang, Pavan Turaga, Theodore P. Pavlic
First submitted to arXiv on: 21 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper presents a novel debiasing framework called Debiasing Global Workspace, which learns compositional representations of attributes without requiring specific bias types to be defined in advance. The framework introduces attention-based information bottlenecks that learn robust, generalizable, and decomposable latent embeddings corresponding to intrinsic and bias-related attributes. By learning shape-centric representations, the approach achieves robust performance on out-of-distribution (OOD) datasets. The paper conducts comprehensive quantitative and qualitative evaluations on biased datasets to demonstrate its efficacy in attribute-centric representation learning and its ability to differentiate between intrinsic and bias-related features. (A rough, illustrative sketch of an attention-based bottleneck follows this table.) |
Low | GrooveSquid.com (original content) | This paper is about a new way to fix neural networks that learn bad shortcuts when given biased data. Right now, many approaches try to debias this kind of data to get more accurate predictions. But few studies have focused on what the model actually “sees” in the data and how it pays attention to certain attributes. The paper proposes a new framework called Debiasing Global Workspace that helps models learn better representations of these attributes without knowing what specific biases are present. By learning about shapes and other intrinsic features, the approach gets better at making predictions on new, unseen data. |
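To make the idea of an attention-based information bottleneck more concrete, below is a minimal, hypothetical PyTorch sketch: learnable attribute slots cross-attend over an image feature map, with one set of slots intended for intrinsic (e.g., shape) attributes and another for bias-related ones. The module name, dimensions, and slot counts are illustrative assumptions only; this is not the paper's actual Debiasing Global Workspace implementation.

```python
# Hypothetical sketch (not the authors' code): a cross-attention "bottleneck"
# that routes image features into two small sets of attribute slots, one meant
# for intrinsic (e.g., shape) attributes and one for bias-related attributes.
import torch
import torch.nn as nn

class AttributeBottleneck(nn.Module):
    def __init__(self, feat_dim=128, slot_dim=64, n_intrinsic=4, n_bias=4):
        super().__init__()
        # Learnable slots act as queries; image features provide keys/values.
        self.intrinsic_slots = nn.Parameter(torch.randn(n_intrinsic, slot_dim))
        self.bias_slots = nn.Parameter(torch.randn(n_bias, slot_dim))
        self.to_k = nn.Linear(feat_dim, slot_dim)
        self.to_v = nn.Linear(feat_dim, slot_dim)
        self.scale = slot_dim ** -0.5

    def attend(self, slots, k, v):
        # slots: (S, D); k, v: (B, N, D) -> pooled slot readouts (B, S, D)
        attn = torch.einsum('sd,bnd->bsn', slots, k) * self.scale
        attn = attn.softmax(dim=-1)  # each slot attends over spatial locations
        return torch.einsum('bsn,bnd->bsd', attn, v)

    def forward(self, feats):
        # feats: (B, N, feat_dim), e.g. a flattened CNN feature map
        k, v = self.to_k(feats), self.to_v(feats)
        z_intrinsic = self.attend(self.intrinsic_slots, k, v)
        z_bias = self.attend(self.bias_slots, k, v)
        return z_intrinsic, z_bias

# Usage: the two readouts could feed separate heads so that a training
# objective (not shown here) encourages them to carry intrinsic vs. bias
# information.
feats = torch.randn(8, 49, 128)          # batch of flattened 7x7 feature maps
z_int, z_b = AttributeBottleneck()(feats)
print(z_int.shape, z_b.shape)            # (8, 4, 64) and (8, 4, 64)
```

The split into two slot groups is only one plausible way to realize "decomposable" embeddings; the paper's actual mechanism and losses should be taken from the original text.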
Keywords
- Artificial intelligence
- Attention
- Representation learning