


How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?

by Atsuki Yamaguchi, Aline Villavicencio, Nikolaos Aletras

First submitted to arXiv on: 17 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores how to adapt large language models (LLMs) to low-resource languages so that they generate non-English text more efficiently. Current LLMs rely on English-centric tokenizers and vocabularies, which drives up usage costs for non-English speakers. The authors focus on expanding the model’s vocabulary with target-language tokens, a method that has already shown promise in high-resource settings. They investigate various strategies for initializing the new token embeddings and for continued pre-training under a strict data budget (just 30K sentences, or roughly 0.01GB of text). Extensive experiments across languages, tasks, and models identify approaches that adapt LLMs for faster inference while maintaining competitive performance.
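The vocabulary-expansion step described above can be pictured with a short sketch. Below is a minimal, illustrative Python example using Hugging Face transformers: it adds new target-language tokens to a tokenizer, grows the embedding matrix, and initializes each new embedding as the mean of its original subword embeddings. This mean-subword heuristic is one common initialization strategy, not necessarily the paper’s exact method; the base model name and tokens are placeholders, not the paper’s actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical target-language tokens to add (illustrative only).
new_tokens = ["こんにちは", "ありがとう"]

# Record each new token's decomposition under the ORIGINAL tokenizer
# before the vocabulary is extended, so we can average those embeddings.
subword_ids = {
    t: tokenizer.encode(t, add_special_tokens=False) for t in new_tokens
}

# Extend the vocabulary and grow the embedding matrix to match.
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Mean-subword initialization: start each new token's embedding at the
# average of the embeddings of the subwords it replaces.
emb = model.get_input_embeddings().weight
with torch.no_grad():
    for token, ids in subword_ids.items():
        new_id = tokenizer.convert_tokens_to_ids(token)
        emb[new_id] = emb[torch.tensor(ids)].mean(dim=0)
```

After this initialization step, the adapted model would be further pre-trained on the small target-language corpus (the ~30K sentences the paper budgets) before being used for inference.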
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper helps make big language models work better for languages where there isn’t much training data. Right now, these models are really good at English but struggle with other languages, because their vocabularies were built mostly from English words. That makes them slower and more expensive to use for non-English text. The authors tackle this by adding new words to the model’s vocabulary that match the language it needs to handle. They test different ways of doing this and find that some approaches work really well, even with just a small amount of text in the target language.

Keywords

» Artificial intelligence
» Inference