Summary of Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize, by Tianren Zhang et al.
Feature contamination: Neural networks learn uncorrelated features and fail to generalize
by Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen
First submitted to arXiv on: 5 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper examines why deep neural networks fail to generalize under distribution shifts, arguing that the fundamental difficulty of out-of-distribution (OOD) generalization remains understudied despite recent advances. The authors empirically show that even training a student network to fit representations obtained from a teacher network that generalizes OOD is insufficient for the student's own generalization (a minimal sketch of this teacher-student probe follows the table). Through a theoretical analysis of two-layer ReLU networks trained by SGD under a structured feature model, the study identifies a new mechanism called "feature contamination": the networks learn features that are uncorrelated with the target alongside predictive features, leading to generalization failure. This finding runs counter to the prevailing narrative that attributes OOD failure to spurious correlations. |
Low | GrooveSquid.com (original content) | In this paper, researchers try to understand why deep neural networks often struggle with new data they haven't seen before. They find that even if a network is trained to imitate a teacher network that handles new data well, it still might not handle new data itself. By studying how these networks learn features, the researchers discover a new problem called "feature contamination": the network picks up irrelevant features along with important ones, making it harder for the network to work well on new data. |
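To make the teacher-student probe from the medium summary concrete, below is a minimal, hypothetical PyTorch sketch: a student two-layer ReLU network is trained with SGD only to match a teacher's hidden representations on in-distribution inputs. All names, dimensions, and the random placeholder data are illustrative assumptions and not the paper's actual setup or code.

```python
# Hypothetical sketch of the representation-distillation probe described above.
# Dimensions and data are illustrative placeholders, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

DIM_IN, DIM_HIDDEN, N_TRAIN = 32, 64, 512

class TwoLayerReLU(nn.Module):
    """Two-layer ReLU network of the kind analyzed in the paper."""
    def __init__(self, dim_in, dim_hidden, dim_out=1):
        super().__init__()
        self.hidden = nn.Linear(dim_in, dim_hidden)
        self.head = nn.Linear(dim_hidden, dim_out)

    def features(self, x):
        # Hidden-layer (post-ReLU) representation.
        return torch.relu(self.hidden(x))

    def forward(self, x):
        return self.head(self.features(x))

# Placeholder in-distribution inputs; the paper uses a structured feature model instead.
x_train = torch.randn(N_TRAIN, DIM_IN)

teacher = TwoLayerReLU(DIM_IN, DIM_HIDDEN)  # stand-in for a network assumed to generalize OOD
student = TwoLayerReLU(DIM_IN, DIM_HIDDEN)

# Distillation: the student is trained only to match the teacher's representations
# on in-distribution inputs, mirroring the probe described in the summary.
opt = torch.optim.SGD(student.parameters(), lr=0.1)
for step in range(200):
    opt.zero_grad()
    with torch.no_grad():
        target_feats = teacher.features(x_train)
    loss = nn.functional.mse_loss(student.features(x_train), target_feats)
    loss.backward()
    opt.step()

# The paper's point: even when this distillation loss is small, the student's own
# OOD performance can remain poor, because SGD also accumulates uncorrelated
# ("contaminating") feature directions alongside the predictive ones.
```

This sketch only illustrates the training objective of the probe; evaluating the feature-contamination claim would additionally require an out-of-distribution test set, which the placeholder data here does not provide.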
Keywords
» Artificial intelligence » Generalization » ReLU