Summary of Abstractive Summarization Of Low Resourced Nepali Language Using Multilingual Transformers, by Prakash Dhakal et al.
Abstractive Summarization of Low resourced Nepali language using Multilingual Transformers
by Prakash Dhakal, Daya Sagar Baral
First submitted to arXiv on: 29 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The study explores abstractive summarization in the Nepali language, using multilingual transformer models such as mBART and mT5 to generate headlines for Nepali news articles. The research addresses the challenges of summarizing Nepali text by creating a dataset through web scraping of various Nepali news portals. The fine-tuned models were assessed with ROUGE scores and human evaluation to check that the generated summaries were coherent and preserved the original meaning. The study found that a 4-bit quantized mBART model fine-tuned with LoRA produced the best Nepali news headlines, outperforming the other models (a hedged fine-tuning sketch follows the table). |
Low | GrooveSquid.com (original content) | This research looks at a new way of making headlines for Nepali news articles using special AI models. It’s hard to make summaries in Nepali because there isn’t much data or research on it. The researchers made their own dataset by collecting articles from Nepali news websites and then fine-tuned AI models on it. They tested the models to see how well they worked and found that one model did a great job of making headlines that were accurate and faithful to the articles. |
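
For readers curious what the winning recipe might look like in practice, below is a minimal, hedged sketch of fine-tuning a 4-bit quantized mBART model with LoRA adapters using the Hugging Face `transformers`, `peft`, `bitsandbytes`, and `datasets` libraries. The checkpoint name (`facebook/mbart-large-50`), the dataset file and its `article`/`headline` columns, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: 4-bit quantized mBART + LoRA for Nepali headline generation.
# Checkpoint, dataset layout, and hyperparameters are assumptions for illustration.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "facebook/mbart-large-50"  # assumed multilingual mBART checkpoint

# Load the base model in 4-bit precision via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="ne_NP", tgt_lang="ne_NP")
model = AutoModelForSeq2SeqLM.from_pretrained(
    MODEL_NAME, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small LoRA adapters so only a fraction of the weights are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical corpus scraped from Nepali news portals, one JSON object per line
# with "article" and "headline" fields; replace with the actual dataset.
dataset = load_dataset("json", data_files="nepali_news.jsonl", split="train")

def preprocess(batch):
    inputs = tokenizer(batch["article"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["headline"], max_length=32, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="mbart-nepali-headlines",
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        num_train_epochs=3,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

After training, generated headlines could be scored against reference headlines with a ROUGE implementation (for example, the `evaluate` library's `rouge` metric), mirroring the automatic part of the paper's evaluation alongside human judgment.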
Keywords
» Artificial intelligence » LoRA » ROUGE » Summarization » Transformer