Summary of Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models, by Seungduk Kim et al.
Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
by Seungduk Kim, Seungtaek Choi, Myeongho Jeong
First submitted to arXiv on: 22 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper introduces EEVE-Korean-v1.0, a Korean adaptation of large language models that excels at processing both English and Korean text. Building on recent English-centric LLMs such as SOLAR-10.7B and Phi-2, the authors propose an efficient vocabulary expansion method called EEVE, which combines parameter freezing with subword initialization (see the sketch after this table). Unlike previous efforts that required trillions of training tokens, EEVE can significantly improve non-English proficiency within just 2 billion tokens. The resulting model, EEVE-Korean-10.8B-v1.0, surpasses most instruction-tuned LLMs on the Open Ko-LLM Leaderboard and ranks as the top Korean pre-trained model in the open-source community. |
| Low | GrooveSquid.com (original content) | This paper makes a big discovery! The authors create a new language model that understands both English and Korean really well. That is special because most language models are good at only one or the other. The team used something called EEVE to make their model better, which helps it learn from smaller amounts of data. Their model is now the best one for Korean out there, and they’re sharing it with others so everyone can use it. |
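The summaries name EEVE’s two ingredients, parameter freezing and subword initialization, without showing what they look like in practice. Below is a minimal sketch of one common reading of those ideas using the Hugging Face transformers API: each new token’s embedding is initialized to the mean of its old-vocabulary subword embeddings, and gradient masking keeps the original rows frozen. The checkpoint name and the example Korean tokens are illustrative assumptions, not taken from the paper, and the paper’s actual EEVE recipe is a staged training schedule that this sketch does not reproduce.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: illustrative base checkpoint. The paper builds on SOLAR-10.7B,
# but any causal LM with untied input/output embeddings (true for
# Llama-family models) works for this sketch.
BASE = "upstage/SOLAR-10.7B-v1.0"

model = AutoModelForCausalLM.from_pretrained(BASE)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Hypothetical new Korean tokens to add to the vocabulary.
new_tokens = ["안녕하세요", "감사합니다"]

# Record how the *old* vocabulary splits each new token before mutating
# the tokenizer, so we can average those subword embeddings later.
subword_ids = {
    tok: tokenizer(tok, add_special_tokens=False).input_ids
    for tok in new_tokens
}

old_vocab_size = len(tokenizer)
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

input_emb = model.get_input_embeddings().weight    # (vocab, hidden)
output_emb = model.get_output_embeddings().weight  # lm_head, same shape

# Subword initialization: a new token's embedding starts as the mean of
# the embeddings of its old-vocabulary subword pieces.
with torch.no_grad():
    for tok in new_tokens:
        new_id = tokenizer.convert_tokens_to_ids(tok)
        pieces = torch.tensor(subword_ids[tok])
        input_emb[new_id] = input_emb[pieces].mean(dim=0)
        output_emb[new_id] = output_emb[pieces].mean(dim=0)

# Parameter freezing: train only the embedding rows of the new tokens.
for p in model.parameters():
    p.requires_grad = False
input_emb.requires_grad_(True)
output_emb.requires_grad_(True)

# Zero out gradients on the pre-existing rows so old embeddings stay frozen.
row_mask = torch.zeros(len(tokenizer), 1)
row_mask[old_vocab_size:] = 1.0
input_emb.register_hook(lambda g: g * row_mask.to(g.device))
output_emb.register_hook(lambda g: g * row_mask.to(g.device))
```

After this setup, any standard training loop over Korean text would update only the newly added embedding rows, which is one way the token budget can stay small: the bulk of the model’s parameters are never touched in this stage.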
Keywords
- Artificial intelligence
- Language model