

Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation

by Kartik Kartik, Sanjana Soni, Anoop Kunchukuttan, Tanmoy Chakraborty, Md Shad Akhtar

First submitted to arXiv on: 25 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper tackles code-mixed (Hinglish and Bengalish) to English machine translation, a formidable challenge due to data scarcity and noise. The authors develop HINMIX, a parallel Hinglish-to-English corpus of ~4.2M sentence pairs. They then propose RCMT, a robust perturbation-based joint-training model that learns to handle the noise in real-world code-mixed text by sharing parameters across clean and noisy words. The model also adapts to Bengalish-to-English translation in a zero-shot setup. Evaluation and comprehensive analyses show that RCMT outperforms state-of-the-art code-mixed and robust translation methods.
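The summary above does not detail how the synthetic data or the noisy perturbations are produced. As a toy illustration only (the lexicon, function names, and perturbation scheme below are hypothetical assumptions, not taken from the paper), one common recipe is to substitute English words with romanized Hindi counterparts to synthesize Hinglish, and to inject character-level noise to mimic real-world spelling variation:

```python
import random

# Toy English -> romanized-Hindi lexicon (hypothetical, for illustration only).
LEXICON = {"house": "ghar", "water": "paani", "friend": "dost"}

def synthesize_code_mixed(english_sentence, mix_prob=0.5, seed=0):
    """Replace a fraction of English words with romanized Hindi
    equivalents to create a synthetic Hinglish-style sentence."""
    rng = random.Random(seed)
    out = []
    for word in english_sentence.split():
        if word.lower() in LEXICON and rng.random() < mix_prob:
            out.append(LEXICON[word.lower()])
        else:
            out.append(word)
    return " ".join(out)

def perturb(sentence, seed=0):
    """Simulate real-world noise by dropping one random character
    from a random word -- a crude stand-in for spelling variation."""
    rng = random.Random(seed)
    words = sentence.split()
    i = rng.randrange(len(words))
    if len(words[i]) > 1:
        j = rng.randrange(len(words[i]))
        words[i] = words[i][:j] + words[i][j + 1:]
    return " ".join(words)

# Pair each clean synthetic sentence with a perturbed variant; a joint-training
# model such as RCMT would see both and share parameters across them.
clean = synthesize_code_mixed("my friend drinks water", mix_prob=0.9)
noisy = perturb(clean)
```

In a joint-training setup along the lines the summary describes, the clean and noisy variants of the same sentence would both be fed to the translation model so that shared parameters learn to map them to the same English output.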
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about using machine learning to translate text that mixes two languages, such as Hindi and English, into a single language, English. This is a hard problem because little data is available and the data is often noisy. The authors build a large dataset of Hinglish-to-English sentence pairs and design a model that can handle this kind of noise. They test the model on new, unseen data and show that it works better than other methods.

Keywords

* Artificial intelligence  * Machine learning  * Translation  * Zero-shot