On the Origins of Linear Representations in Large Language Models

by Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam, Victor Veitch

First submitted to arXiv on: 6 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates why high-level semantic concepts appear to be encoded “linearly” in the representation space of large language models. The authors propose a simple latent variable model that formalizes and abstracts concept dynamics in next-token prediction. Within this model, they show that the softmax-with-cross-entropy objective, combined with the implicit bias of gradient descent, promotes linearly encoded concept representations. Experiments confirm that linear representations emerge when models learn data generated by the proposed latent variable model, suggesting the insights generalize beyond the simplified setting. A toy illustration of this setup is sketched after these summaries.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models store high-level concepts like words or phrases “linearly” in their representation space, meaning each concept roughly corresponds to a single direction. But what makes this possible? A new study tries to find out by using a simple mathematical framework to describe how language models learn. The researchers show that two key factors, the way the model is trained and the data it is given, are responsible for concepts ending up as these simple, easy-to-read directions. They tested their ideas on a large language model and found that they hold up.

Keywords

  • Artificial intelligence
  • Cross entropy
  • Gradient descent
  • Large language model
  • Softmax
  • Token