


Out of spuriousity: Improving robustness to spurious correlations without group annotations

by Phuong Quynh Le, Jörg Schlötterer, Christin Seifert

First submitted to arXiv on: 20 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on the paper’s arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Machine learning models often learn correlations between features and class labels that are not causally related. This leads to poor performance on new data where those correlations no longer hold, reducing generalization ability. To address this, the authors propose an approach that extracts a subnetwork from a fully trained network that does not rely on spurious correlations. Leveraging the assumption that data points with similar attributes lie close together in the representation space after ERM training, and employing a novel supervised contrastive loss, the approach forces the model to unlearn the spurious connections. This increases worst-group performance, supporting the hypothesis that the trained network contains a subnetwork that relies only on invariant features for classification, and that spurious influences can be erased even in multi-attribute settings.
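
The “novel supervised contrastive loss” is not spelled out in this summary. As a rough illustration of the mechanism only, here is a minimal PyTorch sketch of a standard supervised contrastive loss (in the style of Khosla et al., 2020), which pulls same-class representations together and pushes different-class ones apart; the function name and temperature value are illustrative, not the paper’s exact formulation.

```python
# Minimal sketch of a supervised contrastive loss (Khosla et al., 2020 style).
# NOT the paper's exact loss -- a stand-in to show the mechanism.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)          # work in cosine-similarity space
    sim = features @ features.T / temperature        # (N, N) pairwise similarity logits
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # never contrast a point with itself
    # positives: pairs that share a class label (self-pairs excluded)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                           # anchors with at least one positive
    mean_log_prob_pos = (log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid]
                         / pos_counts[valid])
    return -mean_log_prob_pos.mean()

# Toy usage: embeddings for 8 points across 3 classes.
feats = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 0, 2, 2, 1])
loss = supervised_contrastive_loss(feats, labels)
loss.backward()
```

Intuitively, pulling together all same-class points regardless of their other attributes pressures the representation to rely on class-relevant (invariant) features rather than the spurious ones that ERM training clusters on.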
Low Difficulty Summary (written by GrooveSquid.com, original content)
Machine learning models sometimes learn connections between features and class labels that aren’t real. This can make them perform poorly on new data where those connections don’t hold. The solution is to find a part of the trained model that doesn’t use these fake connections. This works by assuming that data points with similar characteristics end up close together in a special space, and then using a special type of loss function to help the model forget the fake connections. The result is a model that handles new data better and generalizes well.
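
Neither summary specifies how the subnetwork is identified. As a generic stand-in, the sketch below uses simple global magnitude pruning to show what “keeping a subnetwork” means mechanically: build boolean masks over the weights and zero out everything else. The function names and keep_ratio are hypothetical, not the paper’s method.

```python
# Generic illustration of extracting a sparse subnetwork from a trained model
# via global magnitude pruning -- a stand-in, not the paper's extraction rule.
import torch
import torch.nn as nn

def magnitude_masks(model: nn.Module, keep_ratio: float = 0.3) -> dict:
    """Boolean masks keeping the globally largest-|w| fraction of weight entries."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for name, p in model.named_parameters() if p.dim() > 1])
    k = max(1, int(keep_ratio * all_weights.numel()))
    threshold = torch.topk(all_weights, k, largest=True).values.min()
    return {name: (p.detach().abs() >= threshold)
            for name, p in model.named_parameters() if p.dim() > 1}

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights in place; what remains is the subnetwork."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name].to(p.dtype))

# Example: keep roughly 30% of the weights of a small MLP.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
apply_masks(model, magnitude_masks(model, keep_ratio=0.3))
```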

Keywords

» Artificial intelligence  » Classification  » Contrastive loss  » Generalization  » Loss function  » Machine learning  » Supervised