Summary of Continuous Sign Language Recognition Using Intra-inter Gloss Attention, by Hossein Ranjbar et al.


Continuous Sign Language Recognition Using Intra-inter Gloss Attention

by Hossein Ranjbar, Alireza Taheri

First submitted to arXiv on: 26 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Many studies in continuous sign language recognition (CSLR) have employed transformer-based architectures for sequence modeling due to their ability to capture global contexts. However, vanilla self-attention may not fully exploit local temporal semantics in sign videos. To address this, we propose the intra-inter gloss attention module, which leverages relationships among frames within glosses and dependencies between glosses. The intra-gloss attention module applies localized self-attention within each video chunk to reduce complexity and eliminate noise. The inter-gloss attention module aggregates chunk-level features using average pooling and multi-head self-attention. We remove background noise using segmentation, enabling the model to focus on the signer. Experimental results on the PHOENIX-2014 benchmark dataset demonstrate improved accuracy and a competitive word error rate (WER) of 20.4.

Low Difficulty Summary (original content by GrooveSquid.com)
Sign language recognition is important for communication between people with hearing impairments. Researchers have used transformer-based models to improve sign language recognition, but these models may not capture local details in videos. A new module called intra-inter gloss attention has been developed to help capture these details. This module looks at the relationships within and between different parts of sign language videos. It also removes background noise from the videos, so the model can focus on the person signing. The results show that this method can improve sign language recognition accuracy and is competitive with other methods.
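The two-stage attention described in the summaries can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: it uses single-head, projection-free scaled dot-product attention for readability, whereas the paper uses multi-head self-attention with learned projections, and the chunk size standing in for a gloss-length window is a hypothetical parameter.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention over x: (seq_len, dim).
    # (The paper uses multi-head attention with learned Q/K/V projections.)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def intra_inter_gloss_attention(frames, chunk_size):
    """frames: (T, dim) frame features; chunk_size: frames per chunk
    (a stand-in for the gloss-level window)."""
    T, _ = frames.shape
    # Intra-gloss: localized self-attention within each chunk, so each
    # frame only attends to its temporal neighbors.
    chunks = [frames[i:i + chunk_size] for i in range(0, T, chunk_size)]
    intra = [self_attention(c) for c in chunks]
    # Inter-gloss: average-pool each chunk to one feature vector, then
    # apply global self-attention across the chunk-level features.
    pooled = np.stack([c.mean(axis=0) for c in intra])  # (num_chunks, dim)
    inter = self_attention(pooled)                      # (num_chunks, dim)
    return np.concatenate(intra, axis=0), inter

# Usage: 16 frames of 8-dim features, chunked into windows of 4.
rng = np.random.default_rng(0)
frames = rng.normal(size=(16, 8))
intra_out, inter_out = intra_inter_gloss_attention(frames, chunk_size=4)
```

Restricting the first stage to chunks keeps the attention cost linear in the number of chunks rather than quadratic in the full frame count, which is the complexity reduction the summary refers to.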

Keywords

» Artificial intelligence  » Attention  » Self attention  » Semantics  » Transformer