Paramanu: A Family of Novel Efficient Generative Foundation Language Models for Indian Languages
by Mitodru Niyogi, Arnab Bhattacharya
First submitted to arXiv on: 31 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The "Paramanu" family of novel language models for Indian languages is introduced, consisting of monolingual, bilingual, and multilingual models pre-trained from scratch. The models cover 10 languages across 5 scripts, with sizes ranging from 13.29 million to 367.5 million parameters. A RoPE embedding scaling method enables pre-training at longer sequence lengths than GPU memory would ordinarily permit. An efficient Indic tokenizer, "mBharat", is proposed using a combination of BPE and Unigram; it achieves the lowest fertility score and the ability to tokenize unseen languages in both scripts. The models are also tested for language transfer from low-resource to high-resource languages within the same script and typology, and demonstrate this transfer phenomenon. Human evaluations show that Paramanu models outperform several large language models (LLMs) despite being 20 to 64 times smaller. Instruction-tuning datasets were created, and the instruction-tuned models performed well on natural language understanding, inference, and reading comprehension benchmarks. (Illustrative sketches of RoPE scaling and the fertility metric follow the table.) |
| Low | GrooveSquid.com (original content) | Paramanu is a new way to understand Indian languages using computers. It's like a special kind of AI that can learn and remember lots of words and phrases in 10 different languages! The team used special techniques to make it work efficiently, even on regular computers. They also created a tool called "mBharat" that helps the model understand different scripts and languages. When they tested it, they found that what the model learned in one language could carry over to related languages, which is cool! It's like learning from someone who knows lots of words and phrases. The team also showed that their model was really good at understanding what people meant when they wrote something, even if the language was different. |
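The summary mentions a RoPE embedding scaling method for pre-training at longer sequence lengths, but does not spell out how it works. A common way to handle positions beyond the original budget is to rescale the position indices before computing the rotary angles (position interpolation). The sketch below illustrates that general idea under the assumption of standard rotary embeddings; it is not necessarily the paper's exact scaling method.

```python
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """Rotary position-embedding angles with optional position scaling.

    scale < 1 compresses the position indices (position interpolation), so a
    longer sequence is mapped into the angle range the model was trained on.
    Generic sketch; not confirmed to be the scaling used in the paper.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() * scale   # rescaled positions
    return torch.outer(positions, inv_freq)             # (seq_len, head_dim // 2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate query/key vectors x of shape (seq_len, head_dim) by the angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# e.g. run at 2048 positions with a model trained on 1024 by halving the scale:
# q = torch.randn(2048, 64)
# q_rot = apply_rope(q, rope_angles(2048, 64, scale=0.5))
```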
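The fertility score cited for the mBharat tokenizer is a standard tokenizer metric: the average number of subword tokens produced per word, where lower is better. Below is a minimal sketch of how it is typically computed; the `tokenize` callable and the commented-out SentencePiece usage (including the model file name) are illustrative stand-ins, not the paper's actual mBharat implementation.

```python
from typing import Callable, Iterable, List

def fertility(tokenize: Callable[[str], List[str]], sentences: Iterable[str]) -> float:
    """Average number of subword tokens per whitespace-delimited word.

    A lower fertility means the tokenizer splits words into fewer pieces.
    """
    total_tokens = 0
    total_words = 0
    for sentence in sentences:
        words = sentence.split()
        if not words:
            continue
        total_words += len(words)
        total_tokens += len(tokenize(sentence))
    return total_tokens / total_words if total_words else float("nan")

# Hypothetical usage with a SentencePiece model (file name is a placeholder):
# import sentencepiece as spm
# sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
# print(fertility(lambda s: sp.encode(s, out_type=str), ["नमस्ते दुनिया", "ভালো আছি"]))
```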
Keywords
» Artificial intelligence » Embedding » Inference » Instruction tuning » Language understanding » Tokenizer