Summary of Augmenting Automatic Speech Recognition Models with Disfluency Detection, by Robin Amann et al.
Augmenting Automatic Speech Recognition Models with Disfluency Detection
by Robin Amann, Zhaolin Li, Barbara Bruno, Jan Niehues
First submitted to arXiv on: 16 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract (available on arXiv). |
| Medium | GrooveSquid.com (original content) | In this paper, researchers tackle the challenge of accurately recognizing speech disfluency, a common phenomenon in everyday conversation. Standard Automatic Speech Recognition (ASR) models are not well-suited for this task, since they are typically trained on fluent transcripts. Current approaches detect disfluencies within transcripts but neglect their precise location and duration in the speech signal. Moreover, previous work often requires model fine-tuning and addresses only a limited range of disfluency types. |
| Low | GrooveSquid.com (original content) | This paper is about fixing a problem with machines that recognize spoken words. Right now, these machines do not handle it well when a speaker makes mistakes or stutters, which happens far more often in real-life conversation than in scripted recordings. The researchers aim to make such machines better at recognizing when someone's speech is not smooth. |
Keywords
- Artificial intelligence
- Fine tuning