Summary of UnLearning from Experience to Avoid Spurious Correlations, by Jeff Mitchell et al.
UnLearning from Experience to Avoid Spurious Correlations
by Jeff Mitchell, Jesús Martínez del Rincón, Niall McLaughlin
First submitted to arXiv on: 4 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv |
Medium | GrooveSquid.com (original content) | This paper proposes a new approach, UnLearning from Experience (ULE), to address spurious correlations in deep neural networks. The authors note that current models are prone to learning such correlations, leading to unexpected failure cases. To mitigate this, they train two classification models in parallel: a student model and a teacher model. The student is trained without constraints and freely learns the spurious correlations, while the teacher is trained on the same task but steered away from the student's mistakes, becoming more robust as the student exposes the shortcuts present in the data. The authors demonstrate the effectiveness of ULE on four datasets: Waterbirds, CelebA, Spawrious, and UrbanCars (a minimal code sketch of this parallel training setup appears after the table). |
Low | GrooveSquid.com (original content) | This paper helps us understand how current AI models can make mistakes. It shows that even though these models are very good at certain tasks, they can still latch onto things that aren't really important. The authors propose a new way to train two AI models together, so that one model learns from the other's mistakes and becomes more reliable. They tested this method on four different datasets and showed it works well. |
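The Medium summary above describes the core training setup: a student and a teacher trained in parallel on the same data, with the teacher steered away from the shortcuts the student picks up. The sketch below shows one way such a loop could look in PyTorch. It is an illustrative sketch only, not the paper's implementation: the tiny `make_model` network, the saliency-alignment penalty weighted by `unlearn_weight`, and all hyperparameters are assumptions made here for demonstration, and the exact ULE objective in the paper may differ.

```python
# Minimal sketch of parallel student/teacher training. The unlearning
# penalty used here (discouraging the teacher's input saliency from
# aligning with the student's) is an illustrative assumption, not the
# paper's exact ULE loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(num_classes: int = 2) -> nn.Module:
    # Hypothetical small CNN standing in for the classifiers used in the paper.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes),
    )

student, teacher = make_model(), make_model()
opt_s = torch.optim.SGD(student.parameters(), lr=1e-2)
opt_t = torch.optim.SGD(teacher.parameters(), lr=1e-2)

def train_step(x, y, unlearn_weight: float = 1.0):
    # Student: trained with no constraints, free to pick up spurious cues.
    opt_s.zero_grad()
    s_loss = F.cross_entropy(student(x), y)
    s_loss.backward()
    opt_s.step()

    # Input saliency of the updated student: a proxy for the (possibly
    # spurious) input features the student relies on.
    x_req = x.clone().requires_grad_(True)
    s_sal = torch.autograd.grad(student(x_req).sum(), x_req)[0].detach()

    # Teacher: solves the same task, but is penalised when its own input
    # saliency aligns with the student's (hypothetical penalty form).
    opt_t.zero_grad()
    x_t = x.clone().requires_grad_(True)
    t_logits = teacher(x_t)
    t_sal = torch.autograd.grad(t_logits.sum(), x_t, create_graph=True)[0]
    align = F.cosine_similarity(t_sal.flatten(1), s_sal.flatten(1), dim=1).mean()
    t_loss = F.cross_entropy(t_logits, y) + unlearn_weight * align
    t_loss.backward()
    opt_t.step()
    return s_loss.item(), t_loss.item()

# Random tensors stand in for a real batch (e.g. from Waterbirds) so the
# function can be run end to end.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 2, (8,))
print(train_step(x, y))
```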
Keywords
» Artificial intelligence » Classification » Student model » Teacher model