Summary of Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation, by Xu Huang et al.
Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation
by Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, Shujian Huang
First submitted to arXiv on: 12 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This study examines how Large Language Models (LLMs) use source and reference information when evaluating machine translation quality, aiming to explain their strong performance on this task. The researchers design controlled experiments across several input modes and model types, using both coarse-grained and fine-grained prompts to isolate the contribution of source versus reference information. They find that reference information significantly improves evaluation accuracy, while, surprisingly, source information sometimes hurts it, indicating that LLMs fail to fully exploit their cross-lingual capability during translation evaluation (a minimal prompt sketch illustrating these input modes follows this table). The findings also suggest directions for improving LLM performance on machine translation evaluation tasks. |
Low | GrooveSquid.com (original content) | This study looks at how Large Language Models use different kinds of input when judging translations. The researchers ran experiments with different input types and models, using simple or detailed prompts to see what works best. They found that providing a reference translation makes the evaluations more accurate, but providing the source text can sometimes make things worse. This shows that LLMs struggle to fully use their ability to work across languages when evaluating translations. |
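To make the input modes concrete, here is a minimal Python sketch of how an evaluation prompt might be assembled under the three settings the study contrasts: reference-only, source-only, and source plus reference. The prompt wording, the 0-100 scoring scale, and the function name `build_eval_prompt` are illustrative assumptions for exposition, not the authors' actual setup.

```python
# Illustrative sketch only: the prompt wording, the 0-100 scale, and all
# names below are assumptions, not the paper's actual prompts.

def build_eval_prompt(translation, source=None, reference=None):
    """Assemble a coarse-grained MT-evaluation prompt for one input mode."""
    parts = ["Rate the quality of the candidate translation on a 0-100 scale."]
    if source is not None:       # source-based evaluation
        parts.append("Source: " + source)
    if reference is not None:    # reference-based evaluation
        parts.append("Reference: " + reference)
    parts.append("Candidate: " + translation)
    parts.append("Score:")
    return "\n".join(parts)

# The three input modes the study contrasts, on a hypothetical example.
hyp = "The cat sits on the mat."
src = "Die Katze sitzt auf der Matte."   # hypothetical German source
ref = "The cat is sitting on the mat."   # hypothetical human reference

print(build_eval_prompt(hyp, reference=ref))               # reference-only
print(build_eval_prompt(hyp, source=src))                  # source-only
print(build_eval_prompt(hyp, source=src, reference=ref))   # source + reference
```

Under the paper's findings, prompts that include the reference (the first and third modes) yield more accurate quality judgments, while adding only the source can degrade them, consistent with LLMs relying on same-language comparison rather than cross-lingual judgment.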
Keywords
* Artificial intelligence
* Translation