Summary of Understanding Token Probability Encoding in Output Embeddings, by Hakaze Cho et al.


Understanding Token Probability Encoding in Output Embeddings

by Hakaze Cho, Yoshihiro Sakai, Kenshiro Tanaka, Mariko Kato, Naoya Inoue

First submitted to arXiv on: 3 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, the researchers investigate how language models encode output token probabilities in their output embedding vectors. They discover a common log-linear encoding within these embeddings and demonstrate that it is both accurate and sparse. They also examine the causal effects of modifying this encoding and find that many dimensions of the output embedding do not contribute to language modeling. By deleting these unimportant dimensions, they reduce the output embedding dimensionality by over 30% without affecting the model’s performance or generated sequences. Additionally, the study reveals that language models capture corpus token frequency information early in pre-training.
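To make the log-linear encoding idea concrete, here is a minimal sketch (not the authors’ code): it fits a single linear direction that maps GPT-2’s output-embedding rows to each token’s average output log-probability on a toy text. The model choice ("gpt2"), the probe text, and plain least-squares fitting are illustrative assumptions, not the paper’s setup.

```python
# Minimal sketch (not the paper's code): check whether rows of GPT-2's output
# embedding (unembedding) matrix linearly predict each token's average output
# log-probability. Model choice ("gpt2"), the toy probe text, and ordinary
# least squares are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from sklearn.linear_model import LinearRegression

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Language models assign a probability to every token in the vocabulary. " * 20
ids = tok(text, return_tensors="pt").input_ids[:, :512]

with torch.no_grad():
    logits = model(ids).logits[0]                    # (seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)    # per-position log-probabilities
    mean_log_p = log_probs.mean(dim=0).numpy()       # average log-prob for each vocab token

W = model.lm_head.weight.detach().numpy()            # output embedding rows, (vocab, d)

# Fit one direction a and bias b so that W @ a + b approximates the log-probabilities;
# a high R^2 would indicate a log-linear encoding readable from the output embeddings.
probe = LinearRegression().fit(W, mean_log_p)
print("R^2 of the log-linear probe:", probe.score(W, mean_log_p))
```

Ordinary least squares keeps the sketch small; a sparse regression over the embedding dimensions would line up more directly with the sparsity claim above.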
Low Difficulty Summary (written by GrooveSquid.com, original content)
Language models work by estimating how likely each word is to come next in a text. This paper looks at what makes those predictions work. The researchers find a simple code hidden inside the model’s output embeddings that describes how likely each word is to appear. This code is important, but they also discover that a large share of the dimensions in these embeddings isn’t actually used by the model. By removing those unused dimensions, they can make the model smaller and more efficient without changing what it predicts. They also find that the model starts learning how common different words are very early in training.
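As a rough illustration of the dimension-pruning idea, here is a hedged sketch (again not the paper’s procedure): it zeroes roughly 30% of GPT-2’s output-embedding dimensions, chosen by a simple column-norm score, and counts how often the greedy next-token prediction changes. Both the 30% ratio and the norm-based importance score are assumptions made only for this example.

```python
# Minimal sketch (not the paper's method): zero out ~30% of the output embedding
# dimensions and measure how often GPT-2's greedy next-token choice changes.
# The 30% ratio and the column-norm importance score are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

ids = tok("Language models assign probabilities to the next token in a sentence.",
          return_tensors="pt").input_ids

with torch.no_grad():
    hidden = model.transformer(ids).last_hidden_state[0]   # final hidden states, (seq_len, d)
    W = model.lm_head.weight                                # output embedding, (vocab, d)

    # Score each of the d dimensions by the norm of its column in W and keep the top 70%.
    importance = W.norm(dim=0)
    keep = importance.argsort(descending=True)[: int(0.7 * W.shape[1])]
    mask = torch.zeros(W.shape[1])
    mask[keep] = 1.0

    full_logits = hidden @ W.T                              # original next-token logits
    pruned_logits = hidden @ (W * mask).T                   # ~30% of dimensions removed

    agree = (full_logits.argmax(-1) == pruned_logits.argmax(-1)).float().mean()
    print(f"Greedy predictions unchanged after pruning: {agree:.1%}")
```

How well predictions survive this crude norm-based pruning will vary; the summary’s claim concerns dimensions identified through the probability encoding itself, not raw column norms.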

Keywords

» Artificial intelligence  » Embedding  » Token