Summary of Low-resource Speech Recognition and Dialect Identification of Irish in a Multi-task Framework, by Liam Lonergan et al.
Low-resource speech recognition and dialect identification of Irish in a multi-task framework
by Liam Lonergan, Mengjie Qian, Neasa Ní Chiaráin, Christer Gobl, Ailbhe Ní Chasaide
First submitted to arXiv on: 2 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Sound (cs.SD); Audio and Speech Processing (eess.AS)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper explores the use of hybrid CTC/attention encoder-decoder models trained with Intermediate CTC (InterCTC) for Irish low-resource speech recognition (ASR) and dialect identification (DID). The results are compared to the current best-performing models for ASR (TDNN-HMM) and DID (ECAPA-TDNN). The paper first establishes an optimal InterCTC setting using a Conformer encoder, then trains a model with an E-branchformer encoder and compares the performance of the two architectures. A multi-task fine-tuning approach is also adopted for language model shallow fusion. The experiments yield a relative improvement of 10.8% in DID accuracy over the baseline ECAPA-TDNN, and word error rate (WER) performance approaching that of the TDNN-HMM model (see the sketch after the table). |
Low | GrooveSquid.com (original content) | This paper looks at using special computer models to understand Irish speech better. It compares these models to others that are already good at this task. The researchers find a way to make their model work well by adding extra training signals partway through it. They also test teaching the model to recognise dialects while it learns to recognise speech, and adding a language model to help it, which improves the results. This is important because there isn't much data available for Irish speech recognition and dialect identification. |
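To make the multi-task idea more concrete, the sketch below shows, in PyTorch, how a model might combine a final CTC loss, an Intermediate CTC (InterCTC) loss, and an auxiliary dialect-identification loss in a single training objective. This is a minimal illustration and not the authors' implementation: the encoder here is a plain GRU stand-in for the Conformer/E-branchformer encoders used in the paper, the attention-decoder branch of the hybrid model is omitted for brevity, and all class names, tensor shapes, and interpolation weights are assumptions.

```python
# Minimal sketch (not the authors' code): combining final CTC, Intermediate CTC,
# and an auxiliary dialect-ID loss in one multi-task objective.
import torch
import torch.nn as nn


class HybridCtcWithDid(nn.Module):
    def __init__(self, vocab_size: int, num_dialects: int, d_model: int = 256):
        super().__init__()
        # Stand-in encoder; the paper uses Conformer / E-branchformer encoders.
        self.encoder = nn.GRU(input_size=80, hidden_size=d_model,
                              num_layers=4, batch_first=True)
        self.ctc_head = nn.Linear(d_model, vocab_size)        # final CTC projection
        self.inter_ctc_head = nn.Linear(d_model, vocab_size)  # InterCTC projection
        self.did_head = nn.Linear(d_model, num_dialects)      # utterance-level dialect classifier
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
        self.did_loss = nn.CrossEntropyLoss()

    def forward(self, feats, feat_lens, targets, target_lens, dialect_ids):
        enc, _ = self.encoder(feats)                        # (B, T, d_model)
        log_probs = self.ctc_head(enc).log_softmax(-1)      # (B, T, vocab)
        inter_log_probs = self.inter_ctc_head(enc).log_softmax(-1)
        did_logits = self.did_head(enc.mean(dim=1))         # mean-pool over time

        # nn.CTCLoss expects log-probs shaped (T, B, vocab).
        l_ctc = self.ctc_loss(log_probs.transpose(0, 1), targets, feat_lens, target_lens)
        l_inter = self.ctc_loss(inter_log_probs.transpose(0, 1), targets, feat_lens, target_lens)
        l_did = self.did_loss(did_logits, dialect_ids)

        # Illustrative interpolation weights only; the paper tunes its own InterCTC
        # setting, and the attention-decoder loss is omitted here for brevity.
        return 0.3 * l_ctc + 0.3 * l_inter + 0.4 * l_did


if __name__ == "__main__":
    model = HybridCtcWithDid(vocab_size=60, num_dialects=3)
    feats = torch.randn(2, 120, 80)              # (batch, frames, mel features)
    feat_lens = torch.tensor([120, 100])
    targets = torch.randint(1, 60, (2, 20))      # padded label sequences
    target_lens = torch.tensor([20, 15])
    dialect_ids = torch.tensor([0, 2])           # e.g. Ulster / Connacht / Munster
    loss = model(feats, feat_lens, targets, target_lens, dialect_ids)
    loss.backward()
```

In the paper, the auxiliary CTC objective is attached to an intermediate encoder layer rather than to the final output as done here for simplicity; tapping an intermediate layer is what makes the auxiliary loss "intermediate" CTC.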
Keywords
» Artificial intelligence » Attention » Encoder » Encoder-decoder » Fine-tuning » Language model » Multi-task