Label Noise Robustness for Domain-Agnostic Fair Corrections via Nearest Neighbors Label Spreading
by Nathan Stromberg, Rohan Ayyagari, Sanmi Koyejo, Richard Nock, and Lalitha Sankar
First submitted to arXiv on: 13 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary In this paper, the authors propose a correction that makes last-layer retraining robust to noisy labels. Their approach uses label spreading on a latent nearest neighbors graph and adds minimal computational overhead, and they show it achieves state-of-the-art worst-group accuracy across a range of symmetric label noise levels and datasets with spurious correlations (see the sketch after this table). |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper is about finding ways to fix mistakes in computer models that learn from data. Sometimes, these models make errors because the training data isn’t perfect. The authors of this paper came up with a new way to correct these mistakes. They tested their method on many different types of data and showed that it works really well, even when there’s a lot of noise or errors in the data. |
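
For readers who want a concrete picture of what "label spreading on a latent nearest neighbors graph" can look like, here is a minimal, hypothetical sketch in Python. It uses scikit-learn's LabelSpreading with a kNN kernel over placeholder embeddings; the function name denoise_labels, the parameter values, and the random data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: smooth noisy labels over a k-nearest-neighbors graph
# built in a model's latent (embedding) space, before last-layer retraining.
# This is NOT the paper's code; names and hyperparameters are assumptions.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def denoise_labels(latent_features, noisy_labels, n_neighbors=10, alpha=0.2):
    """Return labels smoothed over a kNN graph of the latent features.

    latent_features : (n_samples, d) array of embeddings from a base model.
    noisy_labels    : (n_samples,) integer labels, possibly corrupted.
    alpha           : clamping factor -- how much each point trusts its
                      neighbors versus its own (possibly noisy) label.
    """
    spreader = LabelSpreading(kernel="knn", n_neighbors=n_neighbors, alpha=alpha)
    spreader.fit(latent_features, noisy_labels)
    return spreader.transduction_  # smoothed label for every sample

if __name__ == "__main__":
    # Random stand-ins for real embeddings and noisy binary labels.
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(200, 32))      # latent features
    y = rng.integers(0, 2, size=200)    # noisy labels
    y_smoothed = denoise_labels(Z, y)
    print(y_smoothed[:10])
```

The intuition, under these assumptions, is that a point whose noisy label disagrees with most of its latent-space neighbors has its label pulled toward the neighborhood consensus, so the subsequent last-layer retraining sees cleaner supervision.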