


Why are Sensitive Functions Hard for Transformers?

by Michael Hahn, Mark Rofin

First submitted to arXiv on: 15 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper’s original abstract; read it via the arXiv listing.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Transformers have been shown to struggle with certain tasks, such as learning simple formal languages like PARITY, and to favor low-degree functions. While empirical studies have identified these limitations, theoretical understanding has been lacking. In this paper, the authors prove that the loss landscape of transformers is constrained by input-space sensitivity: transformers whose output is sensitive to many parts of the input string occupy isolated points in parameter space, and this leads to a low-sensitivity bias in generalization. This theory unifies a broad range of observed biases in transformer learning, including the preference for low-degree functions and the difficulty of length generalization on PARITY. The takeaway is that understanding transformers’ inductive biases requires studying not only their in-principle expressivity but also their loss landscape. (A small sketch of the sensitivity notion the theory builds on appears after these summaries.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
Transformers are powerful AI models that are used for many tasks, but they have trouble learning some things. For example, they struggle with PARITY, the simple task of deciding whether a string of bits contains an odd number of 1s, and they tend to prefer answers that depend on only a small part of the input. Researchers wanted to understand why transformers behave this way. They found that the way transformers process information shapes how well they can generalize, that is, apply what they have learned to new situations: tasks whose answer can change when almost any part of the input changes are especially hard, and transformers often make mistakes on them, particularly on longer input strings. By studying what makes transformers good at some tasks and bad at others, researchers can build better AI models in the future.
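To make the idea concrete, here is a minimal sketch (not taken from the paper, only an illustration of the standard notion of average sensitivity for Boolean functions): it counts, averaged over all inputs, how many single-bit flips change a function’s output. PARITY changes its output under every bit flip, which is what makes it maximally sensitive, while a function that only looks at one bit is minimally sensitive. The function names below are chosen for this example.

# Minimal illustration (not from the paper): average sensitivity of Boolean functions.
# The sensitivity of f at input x is the number of single-bit flips that change f(x);
# average sensitivity averages this quantity over all 2^n inputs.

from itertools import product

def avg_sensitivity(f, n):
    """Average, over all length-n inputs, of the number of bit flips that change f."""
    total = 0
    for bits in product([0, 1], repeat=n):
        x = list(bits)
        for i in range(n):
            y = x.copy()
            y[i] ^= 1            # flip bit i
            if f(x) != f(y):
                total += 1
    return total / 2 ** n

parity    = lambda x: sum(x) % 2                # changes under every bit flip
first_bit = lambda x: x[0]                      # depends on a single position
majority  = lambda x: int(sum(x) > len(x) / 2)  # sensitive only near the threshold

n = 8
print("PARITY   :", avg_sensitivity(parity, n))     # 8.0 (maximal)
print("FIRST BIT:", avg_sensitivity(first_bit, n))  # 1.0
print("MAJORITY :", avg_sensitivity(majority, n))   # an intermediate value

Running this for n = 8 reports an average sensitivity of 8 for PARITY, 1 for the first-bit function, and an intermediate value for MAJORITY, mirroring the paper’s point that PARITY sits at the high-sensitivity extreme.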

Keywords

  • Artificial intelligence
  • Transformer