
Summary of Bilingual Adaptation of Monolingual Foundation Models, by Gurpreet Gosal et al.


Bilingual Adaptation of Monolingual Foundation Models

by Gurpreet Gosal, Yishi Xu, Gokul Ramakrishnan, Rituraj Joshi, Avraham Sheinin, Zhiming Chen, Biswajit Mishra, Natalia Vassilieva, Joel Hestness, Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Onkar Pandit, Satheesh Katipomu, Samta Kamboj, Samujjwal Ghosh, Rahul Pal, Parvez Mullah, Soundar Doraiswamy, Mohamed El Karim Chami, Preslav Nakov

First submitted to arXiv on: 13 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty summary (written by the paper authors)
Read the original abstract here.

Medium difficulty summary (original content by GrooveSquid.com)
The proposed method efficiently adapts a Large Language Model (LLM) to another language while tackling catastrophic forgetting and the limitations of the original tokenizer. The two-stage approach begins with vocabulary expansion and training of the embedding matrix alone, followed by full continual pre-training on a bilingual corpus. This yields significant improvements in the target language and slight gains in the original language. Ablation studies evaluate the impact of various techniques and learning rates; a minimal code sketch of the two-stage recipe follows the summaries below.

Low difficulty summary (original content by GrooveSquid.com)
This study shows how to make a large language model work well in a new language, such as Arabic. The method has two parts: first, it adds new words to the model's vocabulary and trains only the part that turns words into numbers; then, it trains the whole model on texts from both languages. This makes the model good at Arabic and even a little better at English. To check that the approach works for other languages too, the researchers also adapted models to Hindi.
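
Below is a minimal sketch of that two-stage recipe, written against the Hugging Face Transformers API. It is not the authors' code: the base model ("gpt2"), the two sample Arabic tokens, and the freeze/unfreeze strategy are illustrative assumptions; in the paper, the expanded vocabulary would come from a tokenizer trained on the target-language corpus.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for the monolingual base model; the paper adapts much larger
# English-centric LLMs, but "gpt2" keeps this sketch runnable.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# --- Stage 1: vocabulary expansion + embedding-only training ---
# Hypothetical target-language tokens; in practice these would come
# from a tokenizer trained on the target-language (e.g., Arabic) corpus.
num_added = tokenizer.add_tokens(["مرحبا", "شكرا"])
model.resize_token_embeddings(len(tokenizer))

# Freeze all parameters except the input/output embedding matrices,
# so only the embeddings move while the new vocabulary rows settle.
for param in model.parameters():
    param.requires_grad = False
model.get_input_embeddings().weight.requires_grad = True
if model.get_output_embeddings() is not None:
    model.get_output_embeddings().weight.requires_grad = True

# ... train briefly on target-language text here ...

# --- Stage 2: full pre-training on a bilingual corpus ---
# Unfreeze everything and continue pre-training on a mix of original-
# and target-language data to limit catastrophic forgetting.
for param in model.parameters():
    param.requires_grad = True

# ... continue pre-training on the bilingual corpus here ...

Whether stage 1 updates only the newly added embedding rows or the full matrices is a design choice; this sketch trains the full matrices for simplicity.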

Keywords

» Artificial intelligence  » Embedding  » Large language model  » Tokenizer