Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing

by Zhongwang Zhang, Pengxiao Lin, Zhiwei Wang, Yaoyu Zhang, Zhi-Qin John Xu

First submitted to arXiv on: 8 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Transformers have shown impressive capabilities across a variety of tasks, but their performance on compositional problems remains debated. This work investigates how transformers behave on unseen compositional tasks and finds that the parameter initialization scale plays a critical role in determining whether the model learns inferential (reasoning-based) or symmetric (memory-based) solutions. Analyzing the information flow and vector representations within the model reveals the distinct mechanisms underlying these two solution types. Inferential solutions exhibit a low-complexity bias, which enables the model to learn an individual mapping for each single anchor. These findings offer insight into how the initialization scale tunes the balance between reasoning and memorizing, and the authors propose the initialization rate γ as a tunable hyperparameter.
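
To make the idea of an initialization rate γ concrete, here is a minimal PyTorch sketch. It assumes, purely for illustration, that each linear layer's weights are drawn from a normal distribution with standard deviation fan_in^(-γ), so larger γ means smaller initial weights; the helper init_with_rate and this specific scaling rule are assumptions for the sketch, not necessarily the paper's exact parameterization.

import torch.nn as nn

def init_with_rate(model: nn.Module, gamma: float) -> None:
    """Re-initialize every nn.Linear layer with std = fan_in ** (-gamma).

    Larger gamma -> smaller initial weights. Note: the fused attention
    input projections inside nn.MultiheadAttention are plain parameters,
    not nn.Linear modules, so this sketch leaves them untouched.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            std = module.in_features ** (-gamma)
            nn.init.normal_(module.weight, mean=0.0, std=std)
            if module.bias is not None:
                nn.init.zeros_(module.bias)

# Usage: sweep gamma and compare behavior on held-out compositions.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=2)
init_with_rate(model, gamma=1.0)  # hypothetical value; compare e.g. 0.5 vs 1.0

Sweeping γ in this way and evaluating on held-out compositions is one way to probe whether a model generalizes compositionally or merely memorizes the training pairs, in the spirit of the paper's experiments.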
Low Difficulty Summary (original content by GrooveSquid.com)
Transformers are great at many things, but some problems are harder for them. This study looks at why transformers do well or poorly on certain kinds of puzzles. It finds that how a transformer starts out (the way its internal numbers are set before training) really matters. Some starting points make the transformer figure things out, while others just help it remember what it has seen before. The researchers think this is important for understanding how to get the best out of transformers and other AI models.

Keywords

» Artificial intelligence  » Transformer