
Summary of NeKo: Toward Post Recognition Generative Correction Large Language Models with Task-Oriented Experts, by Yen-Ting Lin et al.


NeKo: Toward Post Recognition Generative Correction Large Language Models with Task-Oriented Experts

by Yen-Ting Lin, Chao-Han Huck Yang, Zhehuai Chen, Piotr Zelasko, Xuesong Yang, Zih-Ching Chen, Krishna C Puvvada, Szu-Wei Fu, Ke Hu, Jun Wei Chiu, Jagadeesh Balam, Boris Ginsburg, Yu-Chiang Frank Wang

First submitted to arXiv on: 8 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Multiagent Systems (cs.MA); Audio and Speech Processing (eess.AS)

Abstract of paper | PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

Summaries by difficulty

High difficulty (written by the paper authors): the paper's original abstract, which can be read via the abstract link above.

Medium difficulty (original content by GrooveSquid.com): This paper presents an approach to building general-purpose post-recognition error correctors, addressing the question of how to most effectively train a single model on a large mixture of domain datasets. The proposed answer treats Mixture-of-Experts (MoE) as more than a scalability tool: experts can learn dataset-specific features and consolidate that knowledge within one model. The authors propose a Multi-Task Correction MoE in which experts are trained to become proficient in specific domains, such as speech-to-text, language-to-text, and vision-to-text datasets. On the Open ASR Leaderboard, the approach achieves an average relative WER reduction of 5.0%, along with significant BLEU improvements on speech and translation tasks. In zero-shot evaluation on the Hyporadise benchmark, the model also outperforms GPT-3.5 and Claude-Opus. (A minimal illustrative code sketch of such a task-oriented MoE layer appears after these summaries.)

Low difficulty (original content by GrooveSquid.com): This paper is about making machines better at fixing the mistakes they make when recognizing text or speech. Currently, a separate model is trained for each type of task, which can be inefficient. The authors propose a way to train one model that handles multiple tasks well, using a Mixture-of-Experts (MoE) design. They show that this approach reduces recognition errors by about 5% (relative) and improves translation scores. This is a step toward making such systems more accurate and helpful.

Keywords

» Artificial intelligence  » BLEU  » Claude  » GPT  » Mixture of experts  » Multi-task  » Translation  » Zero-shot