
Summary of Challenges in Adapting Multilingual LLMs to Low-Resource Languages Using LoRA PEFT Tuning, by Omkar Khade et al.


Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning

by Omkar Khade, Shruti Jagdale, Abhishek Phaltankar, Gauri Takalikar, Raviraj Joshi

First submitted to arXiv on: 27 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This study investigates the impact of Low-Rank Adaptation (LoRA), a Parameter-Efficient Fine-Tuning (PEFT) technique, on Large Language Models (LLMs) for Marathi, a low-resource language. The researchers use a translated Alpaca dataset to fine-tune Gemma models and observe that, while automated evaluation metrics show a performance decline, manual assessments indicate improved target-language generation but reduced reasoning ability. These findings highlight the need for better evaluation methodologies and high-quality native datasets to accurately assess model performance in low-resource settings.
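To make the LoRA idea concrete: instead of updating a full weight matrix during fine-tuning, LoRA freezes the pretrained weights and learns a small low-rank update. Below is a minimal numerical sketch of that mechanism; the dimensions and hyperparameters are illustrative placeholders, not the configuration actually used for the Gemma models in the paper.

```python
import numpy as np

# Hypothetical dimensions (NOT the paper's Gemma settings):
# a d_out x d_in weight matrix adapted with rank r << min(d_out, d_in).
d_out, d_in, r = 64, 64, 4
alpha = 8  # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable, initialized to zero

# Effective weight during fine-tuning: W + (alpha / r) * B @ A.
# Because B starts at zero, the adapted model initially matches the
# pretrained model exactly; training only moves A and B.
W_eff = W + (alpha / r) * (B @ A)

# Parameter savings: the full update would need d_out * d_in values,
# while LoRA trains only r * (d_out + d_in).
full_params = d_out * d_in
lora_params = r * (d_out + d_in)
```

This is why PEFT methods like LoRA are attractive for low-resource adaptation: only a small fraction of parameters is trained, which also limits how far the model can drift from its pretrained behavior.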
Low Difficulty Summary (original content by GrooveSquid.com)
A team of researchers looked at how well Large Language Models (LLMs) work when they’re adapted for a new language, Marathi, which doesn’t have much data available. They used a special dataset that’s been translated into Marathi to see if the models get better or worse after being fine-tuned. The results show that even though some metrics make it seem like the models are doing worse, people looking at the answers think they’re actually getting better at generating text in Marathi. However, they might not be as good at understanding what’s behind the words anymore. This study shows why we need better ways to check how well language models work and more high-quality data for languages that don’t have much help.

Keywords

» Artificial intelligence  » Fine-tuning  » LoRA  » Low-rank adaptation  » Parameter-efficient