
Summary of Large Language Model Based Generative Error Correction: A Challenge and Baselines for Speech Recognition, Speaker Tagging, and Emotion Recognition, by Chao-Han Huck Yang et al.


Large Language Model Based Generative Error Correction: A Challenge and Baselines for Speech Recognition, Speaker Tagging, and Emotion Recognition

by Chao-Han Huck Yang, Taejin Park, Yuan Gong, Yuanchao Li, Zhehuai Chen, Yen-Ting Lin, Chen Chen, Yuchen Hu, Kunal Dhawan, Piotr Żelasko, Chao Zhang, Yun-Nung Chen, Yu Tsao, Jagadeesh Balam, Boris Ginsburg, Sabato Marco Siniscalchi, Eng Siong Chng, Peter Bell, Catherine Lai, Shinji Watanabe, Andreas Stolcke

First submitted to arXiv on: 15 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This generative AI study explores the potential of large language models (LLMs) to enhance acoustic modeling tasks using text decoding results from a frozen, pre-trained automatic speech recognition (ASR) model. The research introduces the GenSEC challenge, comprising three post-ASR language modeling tasks: transcription correction, speaker tagging, and emotion recognition. These tasks aim to emulate future LLM-based agents handling voice-based interfaces while utilizing open pretrained language models or agent-based APIs. Baseline evaluations provide insights and lessons learned for designing future evaluations.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models can improve speech processing by using text decoding results from a pre-trained automatic speech recognition model. Scientists created the GenSEC challenge to test new capabilities in language modeling for speech processing. This challenge includes three tasks: correcting transcription errors, identifying speakers, and recognizing emotions. These tasks help imagine future AI systems that handle voice-based interfaces.

Keywords

* Artificial intelligence