


Understanding and Mitigating Tokenization Bias in Language Models

by Buu Phan, Marton Havasi, Matthew Muckley, Karen Ullrich

First submitted to arXiv on: 24 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces novel algorithms to mitigate tokenization bias, a universal problem in state-of-the-art language models. These models are autoregressive and operate on subword units called tokens, but popular encoding schemes such as maximum prefix encoding (MPE) and byte-pair encoding (BPE) induce a sampling bias that cannot be overcome with more training or data. The proposed methods do not require fine-tuning the model, and their complexity scales linearly with sequence length in the MPE case. This makes it possible to simulate token-free behavior from a tokenized language model. The method’s correctness is verified in a Markov-chain setup, where it accurately recovers the transition probabilities; a toy illustration of the bias itself appears after the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper finds a way to make language models better. These models use tiny pieces of text called tokens to predict what comes next. However, the way we cut text into tokens can cause problems that can’t be fixed by adding more training or data. The researchers suggest new ways to correct this issue without changing the model itself. This makes it possible for a model to behave like a token-free language model even though it only ever sees tokens. The method was tested and shown to work correctly in a special kind of experiment.
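
To make the tokenization bias concrete, here is a minimal, hypothetical sketch (our illustration, not the authors’ code or experiment): a two-symbol Markov chain is encoded with greedy maximum prefix encoding over the toy vocabulary {a, b, ab}, and the resulting token statistics cannot recover the true transition probability P(b | a), because under MPE the token "a" is never followed by a token beginning with "b".

import random

random.seed(0)

# Ground-truth first-order Markov chain over the characters {a, b}.
P = {"a": {"a": 0.4, "b": 0.6}, "b": {"a": 0.7, "b": 0.3}}

def sample_chain(n, start="a"):
    """Sample a length-n character string from the Markov chain."""
    chars, cur = [start], start
    for _ in range(n - 1):
        cur = random.choices(list(P[cur]), weights=list(P[cur].values()))[0]
        chars.append(cur)
    return "".join(chars)

# Toy vocabulary, sorted longest-first so greedy matching takes the maximal prefix.
VOCAB = ["ab", "a", "b"]

def mpe_encode(text):
    """Greedy maximum prefix encoding: always take the longest matching token."""
    tokens, i = [], 0
    while i < len(text):
        for tok in VOCAB:
            if text.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
    return tokens

text = sample_chain(500_000)
tokens = mpe_encode(text)

# Token-level estimate of P(next symbol is "b" | previous symbol is "a"),
# measured as how often a token starting with "b" follows the token "a".
# Under MPE this never happens: any "ab" pair was merged into the token "ab".
after_a = [t for prev, t in zip(tokens, tokens[1:]) if prev == "a"]
biased = sum(t.startswith("b") for t in after_a) / max(len(after_a), 1)

print(f"true P(b | a)                  = {P['a']['b']:.3f}")
print(f"token-level estimate after 'a' = {biased:.3f}")

Running the sketch prints a true P(b | a) of 0.6 against a token-level estimate of essentially 0: no amount of extra data changes this, since the gap is structural. This is the kind of sampling bias the paper’s algorithms correct, recovering token-free behavior from a tokenized model without fine-tuning.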

Keywords

  • Artificial intelligence
  • Autoregressive
  • Fine-tuning
  • Language model
  • Token