

Improving Non-autoregressive Machine Translation with Error Exposure and Consistency Regularization

by Xinran Chen, Sufeng Duan, Gongshen Liu

First submitted to arXiv on: 15 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The Conditional Masked Language Model (CMLM) is an iterative-refinement non-autoregressive translation (IR-NAT) framework that uses the mask-predict paradigm to repeatedly re-predict low-confidence tokens. However, it suffers from a data distribution discrepancy between training, where masked tokens are predicted from ground-truth context, and inference, where they are predicted from the model's own possibly erroneous outputs. To address this, the authors propose a training approach combining error exposure and consistency regularization (EECR). During training, they construct mixed sequences from model predictions, so that masked tokens are optimized under imperfect observation conditions. They also design a consistency learning method that constrains the predicted distributions for masked tokens to agree across different observation conditions. Experiments on five translation benchmarks show average improvements of 0.68 and 0.40 BLEU over the respective base models, with CMLMC-EECR achieving the best performance.
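
To make the mask-predict loop and the error-exposure idea more concrete, here is a minimal PyTorch-style sketch. It is not the authors' released code: the model interface (a call returning per-position vocabulary logits), the [MASK] token id, the masking ratio, and all function names are illustrative assumptions.

import torch

MASK_ID = 0  # assumed id of the [MASK] token

def mask_predict(model, src, tgt_len, iterations=10):
    # Mask-predict decoding: start fully masked, then repeatedly keep the
    # confident predictions and re-mask / re-predict the least confident ones.
    tgt = torch.full((src.size(0), tgt_len), MASK_ID, dtype=torch.long)
    conf = torch.zeros(src.size(0), tgt_len)
    for t in range(iterations):
        logits = model(src, tgt)                  # (batch, tgt_len, vocab), assumed interface
        probs, ids = logits.softmax(-1).max(-1)
        masked = tgt.eq(MASK_ID)
        tgt[masked] = ids[masked]                 # fill masked positions with predictions
        conf[masked] = probs[masked]
        n_mask = int(tgt_len * (iterations - t - 1) / iterations)  # linear decay schedule
        if n_mask == 0:
            break
        worst = conf.topk(n_mask, dim=-1, largest=False).indices
        tgt.scatter_(1, worst, MASK_ID)           # re-mask low-confidence tokens
        conf.scatter_(1, worst, 0.0)
    return tgt

def error_exposure_batch(model, src, tgt, mask_ratio=0.5):
    # Error exposure: observed (unmasked) positions come from the model's own
    # predictions rather than ground truth, so masked tokens are learned under
    # the imperfect context the model will actually see at inference time.
    with torch.no_grad():
        pred = model(src, torch.full_like(tgt, MASK_ID)).argmax(-1)
    mask = torch.rand(tgt.shape) < mask_ratio
    mixed = torch.where(mask, torch.full_like(tgt, MASK_ID), pred)
    return mixed, mask  # train with cross-entropy on masked positions vs. ground truth

The mixed sequence replaces the usual ground-truth context with model outputs, which is what the summary calls "imperfect observation conditions"; the exact mixing strategy in the paper may differ from this uniform random mask.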
Low Difficulty Summary (written by GrooveSquid.com, original content)
The Conditional Masked Language Model is a way to translate text quickly by predicting many words at once and then re-guessing the words it is least sure about. But it has a problem: it practices on perfect examples during training, yet has to work with its own imperfect guesses in real use. To fix this, researchers came up with two new ways to train the model: error exposure, which lets it practice with its own mistakes, and consistency regularization, which keeps its answers stable. They also created a way to make sure the model is learning from the right kind of data. The results show that this new method can improve translation quality by an average of 0.68 BLEU points.

Keywords

» Artificial intelligence  » Bleu  » Inference  » Mask  » Masked language model  » Regularization  » Translation