Summary of ACT-MNMT Auto-Constriction Turning for Multilingual Neural Machine Translation, by Shaojie Dai et al.


ACT-MNMT Auto-Constriction Turning for Multilingual Neural Machine Translation

by Shaojie Dai, Xin Liu, Ping Luo, Yue Yu

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents a novel approach to the “off-target” issue in large language model (LLM)-based multilingual machine translation: because the LLM is pre-trained on mixed multilingual data, it can misunderstand instructions, translate into the wrong language, or over-generate text. To address this, the authors introduce an Auto-Constriction Turning mechanism for Multilingual Neural Machine Translation (ACT-MNMT), a supervised fine-tuning method that constructs a constrained template on the target side by adding trigger tokens ahead of the ground truth. The trigger tokens can be freely arranged and updated to maximize the likelihood of the label. With this mechanism, the model achieves improved performance across multiple translation directions and substantially reduces off-target phenomena. A minimal sketch of such a target-side template is shown below.
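The sketch below illustrates, in Python, one way such a constrained target template could be built: trigger tokens are prepended to the ground-truth translation, so the decoder label encodes the task and target language before the translation itself. The trigger-token names, the helper function, and the template layout are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed details, not the paper's exact code): prepend
# trigger tokens to the ground-truth target so the decoder label encodes
# the task and target language before the translation text. In practice,
# these tokens would be added to the tokenizer vocabulary and trained to
# maximize the likelihood of the label during supervised fine-tuning.

def build_constrained_target(ground_truth: str, tgt_lang: str) -> str:
    # Hypothetical trigger tokens marking the task and the target language.
    triggers = ["<trigger_task_translate>", f"<trigger_lang_{tgt_lang}>"]
    return " ".join(triggers) + " " + ground_truth

# Example decoder label for an English-to-German training pair:
# "<trigger_task_translate> <trigger_lang_de> Das ist ein Test."
print(build_constrained_target("Das ist ein Test.", "de"))
```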
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps fix a big problem in machine translation. When we use large language models to translate between many languages, they often get confused or produce too much text. To solve this issue, the researchers created a new way to fine-tune these models. They add special tokens ahead of the correct translations to help the model understand what it should be doing. This works really well and improves translation results.

Keywords

» Artificial intelligence  » Fine tuning  » Large language model  » Likelihood  » Supervised  » Translation