Summary of Using Large Language Model for End-to-End Chinese ASR and NER, by Yuang Li et al.
Using Large Language Model for End-to-End Chinese ASR and NER
by Yuang Li, Jiawei Yu, Min Zhang, Mengxin Ren, Yanqing Zhao, Xiaofeng Zhao, Shimin Tao, Jinsong Su, Hao Yang
First submitted to arXiv on: 21 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The paper presents an alternative approach to integrating the speech modality into large language models, specifically connecting the Whisper encoder with ChatGLM3. The authors compare this encoder-decoder architecture with the decoder-only paradigm on Chinese automatic speech recognition (ASR) and named entity recognition (NER) tasks. They evaluate the approaches using conventional metrics like the F1 score as well as a novel fine-grained taxonomy of ASR-NER errors. The results show that the encoder-decoder architecture outperforms the decoder-only approach with short contexts, while the latter benefits from long contexts, leveraging all layers of the large language model. By combining the long-context decoder-only approach with chain-of-thought (CoT) NER, which first infers long-form ASR transcriptions and then predicts NER labels, the authors achieve a state-of-the-art F1 score of 0.805 on the AISHELL-NER test set. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: The paper explores how to combine speech and text data in large language models. The authors compare two ways to do this: one that uses an encoder-decoder architecture, and another that only uses a decoder. They use Chinese speech recognition and named entity recognition tasks to test these approaches. They not only look at the usual metrics like accuracy but also try to understand where errors are coming from. The results show that the encoder-decoder architecture is better with short contexts, while a decoder-only model does better with long contexts. By combining this approach with another technique called chain-of-thought NER, they get even better results. |
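The two-step chain-of-thought NER described above can be sketched as follows. This is an illustrative mock-up only: the prompt wording, function names, and inline tagging scheme are assumptions for the example, not the paper's actual implementation.

```python
# Illustrative sketch of chain-of-thought (CoT) NER as summarized above:
# stage 1 produces the long-form ASR transcription, stage 2 predicts NER
# labels conditioned on that transcription. Prompt templates and the
# (PER ...)/(LOC ...)/(ORG ...) tag format are invented for this example.

def build_asr_prompt(audio_placeholder: str) -> str:
    """Stage 1: ask the LLM decoder to transcribe the speech input."""
    return f"<audio>{audio_placeholder}</audio>\nTranscribe the speech:"

def build_ner_prompt(transcription: str) -> str:
    """Stage 2: condition NER on the stage-1 transcription."""
    return (
        f"Transcription: {transcription}\n"
        "Mark each named entity inline, e.g. (PER ...), (LOC ...), (ORG ...):"
    )

def cot_ner(audio_placeholder, asr_model, ner_model):
    """Chain the two stages: transcribe first, then tag entities."""
    transcription = asr_model(build_asr_prompt(audio_placeholder))
    tagged = ner_model(build_ner_prompt(transcription))
    return transcription, tagged
```

In practice both stages would be served by the same speech-conditioned LLM; they are split into two callables here only to keep the control flow visible.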
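For context on the reported metric (an F1 score of 0.805 on the AISHELL-NER test set), entity-level F1 is conventionally computed from exact matches of (type, span) pairs. A minimal sketch, with invented example entities:

```python
# Toy entity-level F1: precision and recall over exact-match entity
# tuples such as ("PER", "李华"). The entities in the test below are
# invented examples, not data from the paper.

def entity_f1(predicted: set, reference: set) -> float:
    """Harmonic mean of precision and recall over entity tuples."""
    if not predicted or not reference:
        return 0.0
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted)
    recall = true_positives / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Note that in the ASR-NER setting an entity can be missed either because the tagger fails or because the transcription itself garbles the entity string, which is what the paper's fine-grained error taxonomy distinguishes.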
Keywords
» Artificial intelligence » Decoder » Encoder » Encoder-decoder » F1 score » Large language model » NER