

Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data

by Nikita Tsoy, Nikola Konstantinov

First submitted to arXiv on: 27 May 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates simplicity bias, the tendency of deep models to rely too heavily on simple features, which can limit the out-of-distribution generalization of neural networks. The authors characterize this bias for general datasets in the setting of two-layer neural networks initialized with small weights and trained with gradient flow. They prove that, during the early phases of training, the network features cluster around a few directions that are independent of the hidden layer size. They further identify the features learned on datasets with XOR-like patterns and show that simplicity bias intensifies during the later stages of training. The authors support these findings with experiments on image data; a small illustrative sketch of the studied setup appears after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
Simplicity bias is when deep models rely too much on simple things instead of complex ones. This can make it hard for them to work well outside the training data. Researchers have found that this happens under certain conditions, but they don’t know if it’s true in general. In this study, scientists looked at how two-layer neural networks behave when initialized with small weights and trained with gradient flow. They discovered that early on, the network features group together in a few ways that don’t depend on the hidden layer size. For certain types of data, they found out what these features are and saw that simplicity bias gets worse as training goes on. They tested their findings using image data.
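
To make the summarized setup concrete, below is a minimal sketch (not the authors' code) of the regime the paper analyzes: a two-layer ReLU network with small initial weights trained on XOR-like data, with plain gradient descent standing in for the paper's gradient flow. The dataset, network width, learning rate, and clustering diagnostic are all illustrative assumptions.

    # Illustrative sketch, not the authors' implementation: a two-layer ReLU
    # network with small initialization trained on XOR-like data. Gradient
    # descent is used as a discrete-time stand-in for gradient flow, and all
    # hyperparameters are arbitrary demonstration choices.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR-like data in 2D: the label is the sign of the coordinate product.
    n = 200
    X = rng.choice([-1.0, 1.0], size=(n, 2)) + 0.1 * rng.standard_normal((n, 2))
    y = np.sign(X[:, 0] * X[:, 1])

    # Two-layer network f(x) = a^T relu(W x) with small initial weights.
    width, init_scale, lr = 512, 1e-3, 0.05
    W = init_scale * rng.standard_normal((width, 2))  # hidden-layer weights
    a = init_scale * rng.standard_normal(width)       # output weights

    for step in range(5000):
        H = np.maximum(X @ W.T, 0.0)    # hidden activations, shape (n, width)
        r = H @ a - y                   # residuals under squared loss
        mask = (H > 0.0).astype(float)  # ReLU derivative
        grad_a = H.T @ r / n
        grad_W = ((mask * r[:, None]).T @ X) * a[:, None] / n
        a -= lr * grad_a
        W -= lr * grad_W

    # Diagnostic: early in training the hidden-weight directions should
    # occupy only a handful of angle bins, regardless of the chosen width.
    norms = np.linalg.norm(W, axis=1)
    dirs = W[norms > 1e-8] / norms[norms > 1e-8, None]
    angles = np.degrees(np.arctan2(dirs[:, 1], dirs[:, 0]))
    hist, _ = np.histogram(angles, bins=36, range=(-180.0, 180.0))
    print("occupied 10-degree direction bins:", int((hist > 0).sum()))

In the small-initialization regime, the early dynamics mostly rotate the hidden neurons rather than grow them, so the direction histogram, not the training loss, is the quantity that reveals the clustering described above.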

Keywords

  • Artificial intelligence
  • Clustering
  • Generalization