Summary of Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models, by Yuchen Hu et al.
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models
by Yuchen Hu, Chen Chen, Chao-Han Huck Yang, Chengwei Qin, Pin-Yu Chen, Eng Siong Chng, Chao Zhang
First submitted to arXiv on: 23 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The proposed Self-TAught Recognizer (STAR) framework leverages unlabeled data to enhance the robustness of automatic speech recognition (ASR) systems in diverse target domains, such as noise and accents. STAR is developed for prevalent speech foundation models based on Transformer-related architectures with auto-regressive decoding, like Whisper and Canary. The framework introduces a novel indicator that integrates step-wise information during decoding to assess the token-level quality of pseudo labels without ground truth, guiding model updates for effective unsupervised adaptation (see the illustrative sketch after this table). Experimental results show STAR achieves an average 13.5% relative reduction in word error rate across 14 target domains, sometimes approaching the upper-bound performance of supervised adaptation. STAR also prevents catastrophic forgetting and exhibits high data efficiency, requiring less than one hour of unlabeled data. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: STAR is a new way to make automatic speech recognition (ASR) better at understanding different voices, even when speakers are in noisy environments or have strong accents. It builds on large pretrained speech models, called foundation models, that use the Transformer architecture. STAR lets these models learn from lots of unlabeled audio, so they can get better without needing a large amount of labeled training data. It does this by checking how confident the model is at each decoding step and adjusting the learning process accordingly. This helps the model avoid “forgetting” what it learned earlier and keeps it very data-efficient. |
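The medium summary describes STAR’s core mechanism: score how trustworthy each pseudo-labeled token is using information available during decoding, then let those scores steer the unsupervised model update. Below is a minimal, self-contained PyTorch sketch of that general idea only. It uses plain per-token confidence as the quality proxy and random tensors in place of a real Whisper-style decoder; the function names, tensor shapes, and scoring rule are illustrative assumptions, not the paper’s actual indicator or training pipeline.

```python
import torch
import torch.nn.functional as F

# Illustrative only: confidence-weighted pseudo-label loss. The simple
# confidence score below is an assumed stand-in for STAR's step-wise
# token-quality indicator.

def token_quality(log_probs: torch.Tensor, pseudo_ids: torch.Tensor) -> torch.Tensor:
    """Probability the decoder assigned to each pseudo-label token (quality proxy)."""
    # log_probs: (T, V) decoder log-probabilities; pseudo_ids: (T,)
    return log_probs.gather(1, pseudo_ids.unsqueeze(1)).squeeze(1).exp()

def weighted_pseudo_label_loss(logits: torch.Tensor,
                               pseudo_ids: torch.Tensor,
                               quality: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against pseudo labels, down-weighting low-quality tokens."""
    ce = F.cross_entropy(logits, pseudo_ids, reduction="none")  # (T,)
    return (quality * ce).sum() / quality.sum().clamp_min(1e-8)

# Toy usage: random logits stand in for a decoder pass over unlabeled audio.
T, V = 12, 100
logits = torch.randn(T, V, requires_grad=True)
log_probs = logits.detach().log_softmax(dim=-1)
pseudo_ids = log_probs.argmax(dim=-1)            # greedy pseudo labels
quality = token_quality(log_probs, pseudo_ids)   # step-wise quality scores
loss = weighted_pseudo_label_loss(logits, pseudo_ids, quality)
loss.backward()                                  # gradients would drive adaptation
```

Soft weighting, rather than hard filtering, keeps every utterance in the adaptation set while limiting the influence of unreliable tokens, which is consistent with the data efficiency the summaries mention.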
Keywords
» Artificial intelligence » Supervised » Token » Transformer » Unsupervised