Bailong: Bilingual Transfer Learning based on QLoRA and Zip-tie Embedding

by Lung-Chuan Chen, Zong-Ru Li

First submitted to arXiv on: 1 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

The proposed method combines efficient multilingual pre-training, fine-tuning, and benchmarking techniques to improve cross-lingual transfer in English-dominated open-source large language models (LLMs). Leveraging QLoRA and a novel zip-tie embedding initialization, the model is further trained on Traditional Chinese data and achieves competitive performance in multi-turn dialogue scenarios. The resulting model, Bailong-instruct 7B, outperforms other open-source models of similar or larger parameter size on benchmark datasets.
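
The summary names a “zip-tie embedding” initialization for the extended vocabulary but does not spell out its mechanics. A common way to initialize newly added tokens, and a plausible reading of the idea, is to set each new token’s embedding to the average of the embeddings of its decomposition under the original tokenizer. The sketch below assumes a Llama-style base model and a hypothetical extended Traditional Chinese tokenizer whose first IDs coincide with the original vocabulary; it illustrates that idea and is not necessarily the paper’s exact procedure.

```python
# Sketch: initialize embeddings of newly added tokens by averaging the
# embeddings of their sub-token decomposition under the original tokenizer.
# Assumptions: the extended tokenizer's first len(old_tok) IDs match the
# original vocabulary; model and tokenizer paths are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"                # assumed base model
old_tok = AutoTokenizer.from_pretrained(base)
new_tok = AutoTokenizer.from_pretrained("./extended-zh-tw-tokenizer")  # hypothetical
model = AutoModelForCausalLM.from_pretrained(base)

old_vocab_size = len(old_tok)
model.resize_token_embeddings(len(new_tok))      # new rows start randomly initialized
emb = model.get_input_embeddings().weight

with torch.no_grad():
    for new_id in range(old_vocab_size, len(new_tok)):
        text = new_tok.decode([new_id])          # surface form of the new token
        sub_ids = old_tok(text, add_special_tokens=False)["input_ids"]
        if sub_ids:                              # average the old sub-token embeddings
            emb[new_id] = emb[sub_ids].mean(dim=0)
```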
Low Difficulty Summary (original content by GrooveSquid.com)

Large language models can be used for many tasks, but most are trained mostly on English data. This makes them weaker at understanding and generating text in languages with fewer available resources. To fix this, researchers have proposed various methods for making models work better across languages. One approach is to fine-tune the model on additional data, but this requires a lot of computing power. In this paper, the authors combine several techniques that make it cheaper for an LLM to learn another language. They train the model on Traditional Chinese data and test its performance in multi-turn dialogue scenarios.
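
QLoRA is what keeps the language adaptation affordable: the base model stays frozen in 4-bit precision and only small low-rank adapter matrices are trained. Below is a minimal, generic QLoRA setup using the transformers, peft, and bitsandbytes libraries; the base model name and all hyperparameter values are illustrative assumptions, not the paper’s reported configuration.

```python
# Minimal, generic QLoRA setup (illustrative values, not the paper's).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # keep frozen base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # assumed English-dominated base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                   # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters receive gradients
```

Continued pre-training on Traditional Chinese text and instruction tuning can then run with a standard training loop, with only the adapter weights, a small fraction of the 7B parameters, being updated.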

Keywords

» Artificial intelligence  » Embedding  » Fine tuning