
Transformers Learn Low Sensitivity Functions: Investigations and Implications

by Bhavya Vasudeva, Deqing Fu, Tianyi Zhou, Elliott Kau, Youqi Huang, Vatsal Sharan

First submitted to arxiv on: 11 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (stat.ML)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The abstract discusses the inductive biases of transformers, a type of neural network architecture that achieves state-of-the-art accuracy and robustness across various tasks. The study identifies the sensitivity of the model to token-wise random perturbations in the input as a unified metric that explains the inductive bias of transformers and distinguishes them from other architectures like MLPs, CNNs, ConvMixers, and LSTMs. The authors show that transformers have lower sensitivity than these other architectures across both vision and language tasks. This low-sensitivity bias has important implications: it correlates with improved robustness, can be used as an efficient intervention to further improve the robustness of transformers, corresponds to flatter minima in the loss landscape, and serves as a progress measure for grokking.
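The sensitivity metric described above can be illustrated with a minimal sketch: perturb one randomly chosen token at a time and measure how much the model's output moves. The `token_sensitivity` helper below is a hypothetical illustration written for this summary, not the authors' implementation, and it stands in for a real transformer with any callable that maps a token array to an output vector.

```python
import numpy as np

def token_sensitivity(model, x, n_samples=8, rng=None):
    """Rough estimate of sensitivity to token-wise random perturbations.

    model: callable mapping a (seq_len, dim) array to an output vector
    x:     (seq_len, dim) array of token representations
    Returns the average output change over n_samples single-token perturbations.
    """
    rng = rng or np.random.default_rng(0)
    base = model(x)
    diffs = []
    for _ in range(n_samples):
        xp = x.copy()
        pos = rng.integers(x.shape[0])          # pick one token position at random
        xp[pos] = rng.standard_normal(x.shape[1])  # replace it with random noise
        diffs.append(np.linalg.norm(model(xp) - base))
    return float(np.mean(diffs))
```

Under this reading of the metric, a low-sensitivity model (like a transformer, per the paper) would return a smaller value than a high-sensitivity one on the same inputs.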
Low Difficulty Summary (written by GrooveSquid.com; original content)
Transformers are super smart at doing lots of things, but scientists don’t fully understand why they work so well. This paper helps figure out what makes transformers special. The main idea is that transformers are very good at handling small changes in the input data without getting confused. The authors show that transformers are better than other types of neural networks at this, which means they’re also more robust to mistakes or noise in the data. This can be useful for making sure AI systems don’t get tricked into doing something bad.

Keywords

* Artificial intelligence  * Neural network  * Token