Summary of Improving Multilingual Instruction Finetuning via Linguistically Natural and Diverse Datasets, by Sathish Reddy Indurthi et al.
Improving Multilingual Instruction Finetuning via Linguistically Natural and Diverse Datasets
by Sathish Reddy Indurthi, Wenxuan Zhou, Shamil Chollampatt, Ravi Agrawal, Kaiqiang Song, Lingxiao Zhao, Chenguang Zhu
First submitted to arXiv on 1 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract on arXiv.
Medium | GrooveSquid.com (original content) | The paper proposes a novel method for creating multilingual Instruction Fine-Tuning (IFT) datasets that preserve linguistic naturalness and ensure prompt diversity. The approach combines English-focused Large Language Models (LLMs), monolingual corpora, and a scoring function to generate high-quality IFT datasets in multiple languages; a minimal sketch of such a generate-score-filter pipeline appears below this table. LLMs fine-tuned on these datasets show notable improvements on both generative and discriminative tasks, indicating enhanced language comprehension in non-English contexts. On the multilingual summarization task, the proposed method improves over translation-based and template-based datasets by 17.57% and 15.23%, respectively.
Low | GrooveSquid.com (original content) | This paper helps computers follow instructions written in different languages. Currently, most computer models are trained on English instructions, which limits their ability to follow instructions in other languages. The researchers developed a new method for creating instruction-following datasets in many languages that sound natural and contain diverse prompts. They tested this approach with large language models and found significant improvements on tasks such as summarization when the models were fine-tuned on these new datasets.
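The medium-difficulty summary describes a pipeline that pairs English-focused LLMs and monolingual corpora with a scoring function to filter generated data. The sketch below illustrates that generate-score-filter shape in plain Python. Everything here is hypothetical: `generate_instruction`, `score_sample`, and the 0.5 threshold are illustrative stand-ins, not the paper's actual prompting strategy or scoring function.

```python
# Hypothetical sketch of the generate-score-filter idea described above.
# The helpers below are placeholders, not the authors' actual components.

from dataclasses import dataclass


@dataclass
class IFTSample:
    language: str
    instruction: str
    response: str
    score: float = 0.0


def generate_instruction(passage: str, language: str) -> IFTSample:
    """Stand-in for prompting an English-focused LLM to turn a
    native-language passage into an instruction/response pair,
    keeping the text in-language."""
    instruction = f"Summarize the following {language} text: {passage}"
    response = passage[:80]  # placeholder "summary"
    return IFTSample(language=language, instruction=instruction, response=response)


def score_sample(sample: IFTSample) -> float:
    """Stand-in for the paper's scoring function; here, a trivial
    length-based heuristic in place of quality/naturalness scoring."""
    return min(len(sample.response) / 80.0, 1.0)


def build_ift_dataset(corpus: dict[str, list[str]], threshold: float = 0.5) -> list[IFTSample]:
    """Generate candidates from monolingual corpora, score them,
    and keep only those above the quality threshold."""
    kept: list[IFTSample] = []
    for language, passages in corpus.items():
        for passage in passages:
            sample = generate_instruction(passage, language)
            sample.score = score_sample(sample)
            if sample.score >= threshold:
                kept.append(sample)
    return kept


if __name__ == "__main__":
    corpus = {
        "German": ["Berlin ist die Hauptstadt Deutschlands und hat rund 3,7 Millionen Einwohner."],
        "Hindi": ["दिल्ली भारत की राजधानी है।"],
    }
    for s in build_ift_dataset(corpus):
        print(f"[{s.language}] score={s.score:.2f} :: {s.instruction[:60]}...")
```

In a real pipeline, `generate_instruction` would call an instruction-tuned LLM on each native-language passage, and `score_sample` would apply a learned quality score before filtering, rather than the toy heuristics used here.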
Keywords
» Artificial intelligence » Fine tuning » Prompt » Summarization » Translation