Summary of IndicSentEval: How Effectively Do Multilingual Transformer Models Encode Linguistic Properties for Indic Languages?, by Akhilesh Aravapalli et al.


IndicSentEval: How Effectively do Multilingual Transformer Models encode Linguistic Properties for Indic Languages?

by Akhilesh Aravapalli, Mounika Marreddy, Subba Reddy Oota, Radhika Mamidi, Manish Gupta

First submitted to arXiv on: 3 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the encoding capabilities and robustness of 9 multilingual Transformer models (7 universal and 2 Indic-specific) across 8 linguistic properties, 13 perturbations, and 6 Indic languages. The researchers introduce a novel benchmark dataset, IndicSentEval, containing around 47K sentences. Their probing analysis reveals that while universal models generally perform well for English, they show mixed results for Indic languages, with Indic-specific models capturing linguistic properties better. Notably, universal models exhibit better robustness than Indic-specific models under certain perturbations, such as dropping nouns and verbs or keeping only nouns. The study provides valuable insights into the strengths and weaknesses of popular multilingual Transformer-based models for different Indic languages.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how well some AI language models work with Indian languages. It asks which features these models learn from text, and whether they can handle changes in the text that might happen when people write or speak. The researchers test 9 of these models on 8 different types of information (like grammar or meaning) across 6 Indian languages. They create a new dataset with about 47,000 sentences to do this testing. Surprisingly, the models don’t all work equally well for each language: some are better than others! The study helps us understand how these AI models work and what they can do (and can’t do) for different Indian languages.
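
To make the probing setup described in the summaries above more concrete, the sketch below shows how a simple layer-wise probe over a frozen multilingual encoder can be built. It assumes the Hugging Face transformers library and scikit-learn; the model choice (google/muril-base-cased), the toy Hindi sentences, and the probed surface property (sentence length) are illustrative assumptions, not the paper's actual models, data, or evaluation protocol.

# Minimal probing sketch. Assumptions: Hugging Face `transformers` + scikit-learn;
# the model, toy sentences, and probed property are illustrative, not the paper's setup.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "google/muril-base-cased"  # an Indic-focused multilingual encoder (illustrative choice)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(sentences, layer):
    """Mean-pool token representations from one frozen encoder layer."""
    feats = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, hidden_dim)
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

# Toy probing task: predict a surface property (is the sentence longer than 6 words?)
# from frozen layer-8 representations.
sentences = [
    "यह एक छोटा वाक्य है।",
    "मुझे चाय पसंद है।",
    "कल बारिश हुई थी।",
    "मैं आज सुबह बाज़ार गया और वहाँ से ताज़ी सब्ज़ियाँ और फल खरीद कर लाया।",
    "इस किताब में भारतीय इतिहास के कई महत्वपूर्ण और रोचक अध्याय विस्तार से बताए गए हैं।",
    "बच्चे शाम को पार्क में दौड़ते, खेलते और एक दूसरे के साथ खूब मस्ती करते हैं।",
]
labels = [int(len(s.split()) > 6) for s in sentences]

X = embed(sentences, layer=8)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("toy probe accuracy (on training data):", probe.score(X, labels))

In the actual study, such probes would be trained and evaluated on a much larger benchmark (around 47K sentences across 6 Indic languages) with held-out splits, and robustness would be assessed by re-running them on perturbed inputs, for example sentences with nouns or verbs dropped.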

Keywords

» Artificial intelligence  » Transformer