Summary of "Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation", by Amartya Sanyal et al.


Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation

by Amartya Sanyal, Yaxi Hu, Yaodong Yu, Yian Ma, Yixin Wang, Bernhard Schölkopf

First submitted to arxiv on: 27 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The "Accuracy-on-the-line" phenomenon in machine learning refers to the observation that models' in-distribution (ID) and out-of-distribution (OOD) accuracies are positively correlated. This paper examines how robust that relationship is and finds that it breaks down in the presence of noisy data and nuisance features: under these conditions, ID and OOD accuracy can become negatively correlated, a regime the authors call "Accuracy-on-the-wrong-line". They also prove a lower bound on the OOD error of linear classification models and demonstrate the phenomenon on both synthetic and real datasets; a toy numerical sketch of the effect follows the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Machine learning has an interesting pattern called "Accuracy-on-the-line": models that do well on test data similar to what they were trained on usually also do well on new, different kinds of data. This paper shows that the pattern can break. When the training data contains noise (like labelling mistakes or random junk) or features that aren't actually relevant to the task, the models that look best on familiar data can end up being the worst on new data, which the authors call "Accuracy-on-the-wrong-line".
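
To give a feel for the mechanism described in the medium difficulty summary, here is a minimal synthetic sketch in Python. It is an illustration only, not the authors' experimental setup or their lower-bound proof, and all constants (sample size, noise rate eta, feature scales) are assumed toy values. Labels are flipped with probability eta, and a nuisance feature tracks the noisy label in-distribution but carries no signal out-of-distribution; sweeping over linear classifiers that lean more or less on the nuisance feature, ID accuracy (measured against noisy labels) rises while OOD accuracy falls, so the two become negatively correlated.

```python
# Toy sketch of "Accuracy-on-the-wrong-line": label noise plus a nuisance feature
# makes ID and OOD accuracy negatively correlated across a family of linear models.
import numpy as np

rng = np.random.default_rng(0)
n, eta = 5000, 0.3  # sample size and label-noise rate (illustrative assumptions)

def make_split(nuisance_tracks_noisy_label):
    y = rng.choice([-1, 1], size=n)                    # clean label
    y_obs = y * np.where(rng.random(n) < eta, -1, 1)   # label flipped with prob. eta
    core = 1.0 * y + rng.normal(scale=1.0, size=n)     # weak but invariant feature
    if nuisance_tracks_noisy_label:
        # In-distribution: the nuisance feature mimics the *noisy* label.
        nuis = 2.0 * y_obs + rng.normal(scale=1.0, size=n)
    else:
        # Out-of-distribution: the nuisance feature carries no signal.
        nuis = rng.normal(scale=1.0, size=n)
    return np.column_stack([core, nuis]), y, y_obs

X_id, y_id, y_id_obs = make_split(True)
X_ood, y_ood, _ = make_split(False)

# Linear classifiers that place increasing weight on the nuisance direction.
id_accs, ood_accs = [], []
for alpha in np.linspace(0.0, 1.0, 11):
    w = np.array([1.0 - alpha, alpha])
    id_accs.append(np.mean(np.sign(X_id @ w) == y_id_obs))   # ID acc. on noisy labels
    ood_accs.append(np.mean(np.sign(X_ood @ w) == y_ood))    # OOD acc. on clean labels

print("ID  accuracies:", np.round(id_accs, 3))
print("OOD accuracies:", np.round(ood_accs, 3))
print("ID/OOD correlation:", np.corrcoef(id_accs, ood_accs)[0, 1])
```

Running the sketch, classifiers that lean harder on the nuisance feature fit the noisy in-distribution labels better but lose accuracy out-of-distribution, so the printed correlation is strongly negative, which is the qualitative shape of the phenomenon the paper studies.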

Keywords

  • Artificial intelligence
  • Classification
  • Machine learning