
Summary of Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs, by Sheridan Feucht et al.


Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs

by Sheridan Feucht, David Atkinson, Byron Wallace, David Bau

First submitted to arXiv on: 28 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates how Large Language Models (LLMs) process text as sequences of tokens. Individual tokens often bear little relation to the meanings of the words they make up, which makes it unclear how LLMs form higher-level representations from them. For instance, the word “northeastern” is broken down into the tokens ['_n', 'ort', 'he', 'astern'], none of which correspond to semantically meaningful units like “north” or “east”. The study reveals that last-token representations of named entities and multi-token words exhibit an “erasure” effect in early layers, where information about previous tokens is rapidly overwritten. Building on this observation, the authors propose a method to read out the implicit vocabulary of an autoregressive LLM by examining layer-wise differences in token representations, and they demonstrate it on Llama-2-7b and Llama-3-8B (a brief illustrative code sketch follows the summaries below).
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how computers understand text by breaking it down into small pieces called tokens. These tokens don’t always carry the meaning of the words they come from. For example, the word “northeastern” is broken down into tiny pieces that don’t relate to concepts like “north” or “east”. The researchers found that when computers process multi-token words and named entities, the early layers quickly forget information about the earlier tokens. To solve this problem, the authors developed a new method to figure out which words and names the computer is really working with. They applied this approach to two large language models, Llama-2-7b and Llama-3-8B.
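
To make the tokenization example and the idea of layer-wise changes in last-token representations concrete, here is a small illustrative sketch. It is not the authors’ exact procedure: it assumes the Hugging Face transformers library and the meta-llama/Llama-2-7b-hf checkpoint, and it simply measures how much the hidden state at the last subword position changes from one layer to the next.

```python
# Illustrative sketch only (not the paper's exact method). Assumes the Hugging Face
# `transformers` library and access to the meta-llama/Llama-2-7b-hf checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# A single multi-token word: its subword pieces carry little meaning on their own.
inputs = tokenizer("northeastern", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
# prints something like ['<s>', '▁n', 'ort', 'he', 'astern']

with torch.no_grad():
    out = model(**inputs)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq_len, dim].
hidden = torch.stack(out.hidden_states)   # [layers+1, 1, seq_len, dim]
last_token = hidden[:, 0, -1, :]          # last subword of the word, at every layer

# How strongly the last-token representation is rewritten between adjacent layers;
# large early-layer changes are consistent with the "erasure" effect described above.
deltas = (last_token[1:] - last_token[:-1]).norm(dim=-1)
for layer, delta in enumerate(deltas.tolist(), start=1):
    print(f"layer {layer:2d}: change in last-token hidden state = {delta:.2f}")
```

The paper’s actual contribution is to turn such layer-wise differences into a readout of the model’s implicit vocabulary; the sketch above only shows where those differences come from.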

Keywords

» Artificial intelligence  » Autoregressive  » Llama  » Token