UnifiedCrawl: Aggregated Common Crawl for Affordable Adaptation of LLMs on Low-Resource Languages

by Bethel Melesse Tessema, Akhil Kedia, Tae-Sun Chung

First submitted to arXiv on: 21 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Large language models (LLMs) struggle on low-resource languages because training data is scarce. Our UnifiedCrawl method efficiently collects text data for a target language from the entire Common Crawl corpus, yielding monolingual datasets much larger than previously available sources. Fine-tuning multilingual LLMs on this data with efficient adapter methods (QLoRA) gives significant performance boosts on low-resource languages while minimizing VRAM usage, with large improvements in language modeling perplexity and few-shot prompting scores. The result is an affordable way to improve LLMs for low-resource languages on consumer hardware; the source code is released at https://github.com/bethelmelesse/unifiedcrawl.
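
To make the data-collection step concrete, here is a minimal sketch of filtering one Common Crawl WET (extracted-text) file down to a single target language. This is not the authors' actual pipeline (see their repository for that); it assumes the warcio and fasttext packages, a locally downloaded WET segment, fastText's lid.176.bin language-ID model, and Amharic ("am") as the target language. The file names and confidence threshold are illustrative.

```python
# Sketch: keep only target-language text from one Common Crawl WET file.
# Assumes: pip install warcio fasttext, a downloaded WET segment, and
# fastText's lid.176.bin language-ID model (file names are illustrative).
import fasttext
from warcio.archiveiterator import ArchiveIterator

TARGET_LANG = "am"      # ISO 639-1 code, e.g. Amharic
MIN_CONFIDENCE = 0.9    # arbitrary language-ID threshold for this sketch

lid_model = fasttext.load_model("lid.176.bin")

def extract_monolingual(wet_path: str, out_path: str) -> None:
    """Append target-language documents from a WET file to a text corpus."""
    with open(wet_path, "rb") as stream, \
         open(out_path, "w", encoding="utf-8") as out:
        for record in ArchiveIterator(stream):
            if record.rec_type != "conversion":  # WET text records
                continue
            text = record.content_stream().read().decode("utf-8",
                                                         errors="replace")
            # fastText predicts on a single line, so strip newlines;
            # the first 2000 chars are plenty for language ID.
            labels, probs = lid_model.predict(
                text.replace("\n", " ")[:2000])
            if (labels[0] == f"__label__{TARGET_LANG}"
                    and probs[0] >= MIN_CONFIDENCE):
                out.write(text.strip() + "\n\n")

if __name__ == "__main__":
    extract_monolingual("segment.warc.wet.gz", "amharic_corpus.txt")
```

Note that UnifiedCrawl itself is designed to be far more efficient than this brute-force filter over whole segments; consult the released code for the real extraction method.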

Low Difficulty Summary (original content by GrooveSquid.com)
Large language models (LLMs) have trouble with languages that don’t have much data. We found a way to collect more text data for these languages by using all of Common Crawl’s data. This helps make the LLMs better at understanding and generating text in these languages. We tested our approach and saw big improvements in how well the LLMs worked. We’re sharing our code so others can use it to improve LLMs for low-resource languages.
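
For readers who want a feel for the fine-tuning step the summaries describe, here is a minimal QLoRA-style sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The model name, LoRA hyperparameters, and target modules below are illustrative placeholders, not the paper's exact configuration.

```python
# Sketch: QLoRA-style adapter fine-tuning on a 4-bit quantized base model.
# Assumes: pip install transformers peft bitsandbytes accelerate, plus a
# CUDA GPU. Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",          # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "facebook/xglm-564M"       # placeholder multilingual LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto")

# Freeze the quantized base model and attach small trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (model-specific)
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # only a tiny fraction is trainable

# From here, train with the usual Trainer or a custom loop on the
# monolingual corpus collected from Common Crawl.
```

Because only the small adapter weights receive gradients while the base model stays in 4-bit precision, this setup matches the "consumer hardware, low VRAM" claim in the summaries.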

Keywords

» Artificial intelligence  » Few-shot  » Fine-tuning  » Perplexity  » Prompting