
Summary of Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models, by Jacob Pfau et al.


Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models

by Jacob Pfau, William Merrill, Samuel R. Bowman

First submitted to arXiv on: 24 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Chain-of-thought responses from transformers improve performance across most benchmarks, but it is unclear whether these gains come from human-like task decomposition or simply from the extra computation that additional tokens allow. Our study shows that transformers can use meaningless filler tokens (e.g., ‘……’) in place of a chain of thought to solve two hard algorithmic tasks they could not solve when required to answer immediately. However, we found that learning to use filler tokens is difficult and requires specific, dense supervision to converge. We also provide a theoretical characterization, in terms of the quantifier depth of first-order formulas, of the class of problems where filler tokens are useful. Our results demonstrate that additional tokens can provide computational benefits independent of token choice, which raises the concern that large language models may engage in unauditable, hidden computations increasingly detached from the observed chain-of-thought tokens.
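To make the filler-token setup concrete, below is a minimal, hypothetical sketch of what such training data could look like, assuming a toy 3SUM-style task (one of the two algorithmic tasks studied in the paper). The function name, string format, and parameters here are illustrative assumptions, not the authors' actual code.

# Hypothetical sketch: generate one training string for a toy 3SUM-style
# task, with meaningless '.' filler tokens in place of a chain of thought.
import random

def make_example(n=6, mod=10, num_fillers=30, use_fillers=True):
    """Build one training string: instance, intermediate tokens, answer."""
    nums = [random.randrange(mod) for _ in range(n)]
    # Label: does any triple of inputs sum to 0 modulo `mod`?
    label = any(
        (nums[i] + nums[j] + nums[k]) % mod == 0
        for i in range(n)
        for j in range(i + 1, n)
        for k in range(j + 1, n)
    )
    # Filler condition: the intermediate tokens carry no task information.
    middle = " ".join("." * num_fillers) if use_fillers else ""
    return f"{' '.join(map(str, nums))} : {middle} ANS {label}"

print(make_example())                   # with filler tokens
print(make_example(use_fillers=False))  # immediate-answer baseline

Training a transformer on many such strings, with the loss applied only to the answer tokens, is roughly the kind of setup one would use to test whether the meaningless intermediate tokens themselves provide usable computation.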
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how big language models solve hard problems. The researchers found that these models can use extra "thinking" tokens to solve problems they otherwise get wrong, even when those extra tokens are just meaningless dots. This is different from how humans work, since we rely on logical steps we can explain. The models can get very good at this without showing any readable reasoning at all. This raises questions about how much we can trust these models when the real work happens hidden inside them rather than in steps we can check.

Keywords

  • Artificial intelligence
  • Token