Turn-taking and Backchannel Prediction with Acoustic and Large Language Model Fusion
by Jinhan Wang, Long Chen, Aparna Khare, Anirudh Raju, Pranav Dheram, Di He, Minhua Wu, Andreas Stolcke, Venkatesh Ravichandran
First submitted to arXiv on: 26 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, researchers propose a method for predicting turn-taking and backchanneling locations in spoken dialogue by combining a neural acoustic model with a large language model (LLM). The approach is tested on the Switchboard human-human conversation dataset, where it outperforms baseline models using single modalities. To further improve performance, the authors develop a novel multi-task instruction fine-tuning strategy that leverages LLM-encoded knowledge to understand tasks and conversational contexts. The results demonstrate the potential of combining LLMs and acoustic models for more natural interactions between humans and speech-enabled AI agents. |
Low | GrooveSquid.com (original content) | This paper shows how computers can better understand spoken conversations by using both what people say (acoustic model) and what they mean (large language model). It’s like having a conversation with someone who really gets you. The researchers tested their method on a big dataset of human conversations and found that it works much better than previous methods. |
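To make the fusion idea in the medium-difficulty summary concrete, here is a minimal sketch of a late-fusion classifier that concatenates an acoustic-encoder embedding with an LLM context embedding and predicts one of three dialogue events. All names, dimensions, the label set, and the pooling choices are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

ACOUSTIC_DIM = 64   # assumed size of the acoustic encoder's utterance embedding
LLM_DIM = 128       # assumed size of the LLM's dialogue-context embedding
CLASSES = ["turn-hold", "turn-switch", "backchannel"]  # assumed label set

def acoustic_embedding(audio_frames: np.ndarray) -> np.ndarray:
    """Stand-in acoustic encoder: mean-pool frame features into one vector."""
    return audio_frames.mean(axis=0)

def llm_embedding(token_states: np.ndarray) -> np.ndarray:
    """Stand-in LLM encoder: use the last token's hidden state as context."""
    return token_states[-1]

# Fusion head: concatenate both embeddings, then one linear layer + softmax.
W = rng.standard_normal((len(CLASSES), ACOUSTIC_DIM + LLM_DIM)) * 0.01
b = np.zeros(len(CLASSES))

def predict(audio_frames: np.ndarray, token_states: np.ndarray) -> str:
    fused = np.concatenate([acoustic_embedding(audio_frames),
                            llm_embedding(token_states)])
    logits = W @ fused + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))]

# Dummy inputs: 50 audio frames and 10 LLM token hidden states.
label = predict(rng.standard_normal((50, ACOUSTIC_DIM)),
                rng.standard_normal((10, LLM_DIM)))
print(label)
```

In a trained system the fusion head's weights would be learned on labeled conversation data (e.g. Switchboard); the random weights here only demonstrate the data flow from the two modalities to a single prediction.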
Keywords
* Artificial intelligence
* Fine-tuning
* Large language model
* Multi-task