

Co-guiding for Multi-intent Spoken Language Understanding

by Bowen Xing, Ivor W. Tsang

First submitted to arXiv on: 22 Nov 2023

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Co-guiding Net is a novel graph-based model for multi-intent spoken language understanding (SLU). Whereas existing methods rely on unidirectional guidance from intent to slot, this paper proposes a two-stage framework that captures the bidirectional inter-correlations between the intent and slot tasks. The first stage produces initial label estimates for both tasks; the second stage leverages those estimates to model the mutual guidance between them. Two heterogeneous graph attention networks operate over the proposed semantics-label graphs, which effectively represent the relations among their nodes. An extended variant, Co-guiding-SCL Net, further exploits single-task and dual-task semantics contrastive learning under this mutual guidance. Experimental results demonstrate significant improvements over existing models, including a 21.3% relative improvement in overall accuracy on the MixATIS dataset.
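The two-stage mutual guidance described above can be sketched in plain Python. This is only an illustrative toy, not the authors' model: the heterogeneous graph attention networks are replaced by simple dot-product attention and additive conditioning, and all dimensions, names, and the random features are invented for the sketch.

```python
import math
import random

# Illustrative sketch of two-stage "co-guiding": stage 1 makes initial
# intent/slot estimates, stage 2 lets each task's estimate guide the other.
# All names and dimensions here are invented stand-ins.
random.seed(0)

def rand_vec(d):
    return [random.gauss(0.0, 1.0) for _ in range(d)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def mean(vecs):
    d = len(vecs[0])
    return [sum(v[j] for v in vecs) / len(vecs) for j in range(d)]

def attend(query, keys, values):
    """Dot-product attention: the query softly aggregates the value vectors."""
    w = softmax([dot(query, k) / math.sqrt(len(query)) for k in keys])
    d = len(values[0])
    return [sum(w[i] * values[i][j] for i in range(len(values))) for j in range(d)]

T, I, S, D = 5, 3, 4, 8                       # tokens, intents, slot labels, hidden dim
tokens = [rand_vec(D) for _ in range(T)]      # encoder outputs for one utterance
intent_emb = [rand_vec(D) for _ in range(I)]  # intent label embeddings
slot_emb = [rand_vec(D) for _ in range(S)]    # slot label embeddings

# --- Stage 1: independent initial predictions for both tasks ---
utt = mean(tokens)
intent_logits = [dot(utt, e) for e in intent_emb]              # utterance-level intents
slot_logits = [[dot(t, e) for e in slot_emb] for t in tokens]  # token-level slots

# Turn the soft initial predictions into guidance vectors (label nodes).
w_int = softmax(intent_logits)
init_intent = [sum(w_int[i] * intent_emb[i][j] for i in range(I)) for j in range(D)]
init_slots = [
    [sum(w[s] * slot_emb[s][j] for s in range(S)) for j in range(D)]
    for w in (softmax(row) for row in slot_logits)
]

# --- Stage 2: mutual (bidirectional) guidance ---
# Intent prediction attends over the estimated slot nodes...
intent_guided = [u + c for u, c in zip(utt, attend(utt, init_slots, init_slots))]
# ...and each token's slot prediction is conditioned on the estimated intent.
slots_guided = [[t[j] + init_intent[j] for j in range(D)] for t in tokens]

final_intent_logits = [dot(intent_guided, e) for e in intent_emb]
final_slot_logits = [[dot(t, e) for e in slot_emb] for t in slots_guided]
```

The key contrast with one-way intent-to-slot guidance is that both `final_*` predictions are computed after each task has seen the other's stage-1 estimate.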
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper introduces a new model, Co-guiding Net, that helps computers understand what people are saying. Current systems can only work out one part of an utterance at a time; this model understands multiple parts at once by looking at how the words relate to each other. The authors also add a technique that lets the model learn from its own initial guesses and improve them. Tests on standard benchmarks show it understands speech better than previous models.

Keywords

* Artificial intelligence  * Attention  * Language understanding  * Semantics