Explanation is All You Need in Distillation: Mitigating Bias and Shortcut Learning
by Pedro R. A. S. Bassi, Andrea Cavalli, Sergio Decherchi
First submitted to arXiv on: 13 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract on arXiv
Medium | GrooveSquid.com (original content) | This paper addresses shortcut learning in deep neural networks, which occurs when biased or spurious correlations in the training data lead models to learn shortcuts instead of generalizable representations. The authors propose explanation distillation: a student network is trained to reproduce the explanations of an unbiased teacher model (such as a vision-language model), computed with techniques like Layer-wise Relevance Propagation (LRP). The student thereby learns the reasons behind the teacher’s decisions without needing access to unbiased data during training. The authors show that explanation distillation yields high resistance to shortcut learning, surpassing alternatives such as group-invariant learning and explanation background minimization. In particular, LRP distillation achieves 98.2% out-of-distribution (OOD) accuracy on the COLOURED MNIST dataset, while deep feature distillation and IRM reach 92.1% and 60.2%, respectively (see the code sketch after this table).
Low | GrooveSquid.com (original content) | This paper is about making sure AI models don’t rely on patterns in their training data that aren’t really meaningful. When models latch onto such shortcuts, they perform worse on new examples they haven’t seen before. The authors developed a way to make a model focus on what actually matters: the model is trained to copy the explanations of a trusted model, that is, the reasons it gives for its decisions. This helps the model avoid shortcuts and handle new data better.
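
To make the approach more concrete, below is a minimal sketch (not the authors' code) of an explanation-distillation loss in PyTorch. It uses gradient-times-input saliency as a simple stand-in for LRP; the function names, the use of ground-truth labels as the explained class, and the `alpha` weighting are illustrative assumptions rather than the paper's actual recipe.

```python
import torch
import torch.nn.functional as F


def saliency(model, x, target, create_graph=False):
    """Gradient-times-input explanation for the target class.

    A simple stand-in for LRP: it assigns each input feature a share
    of the model's decision for the target class.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, target.unsqueeze(1)).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad * x  # relevance map with the same shape as the input


def explanation_distillation_loss(student, teacher, x, target, alpha=1.0):
    """Train the student to reproduce the teacher's explanations.

    `alpha` weights an ordinary logit-distillation term; set it to 0
    to distill explanations alone.
    """
    t_expl = saliency(teacher, x, target).detach()             # frozen teacher
    s_expl = saliency(student, x, target, create_graph=True)   # needs 2nd-order grads
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # Cosine distance between flattened explanation maps ...
    expl_loss = 1.0 - F.cosine_similarity(
        s_expl.flatten(1), t_expl.flatten(1), dim=1
    ).mean()
    # ... plus standard soft-label distillation on the logits.
    kd_loss = F.kl_div(
        F.log_softmax(s_logits, dim=1),
        F.softmax(t_logits, dim=1),
        reduction="batchmean",
    )
    return expl_loss + alpha * kd_loss
```

In a training loop this would be used like any other loss, e.g. `explanation_distillation_loss(student, teacher, x, y).backward()` followed by an optimizer step; because the student's explanation is itself a gradient, backpropagating through it requires the second-order graph enabled above.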
Keywords
» Artificial intelligence » Distillation » Language model » Teacher model