Summary of A Multi-modal Approach to Dysarthria Detection and Severity Assessment Using Speech and Text Information, by M Anuprabha et al.


A Multi-modal Approach to Dysarthria Detection and Severity Assessment Using Speech and Text Information

by M Anuprabha, Krishna Gurugubelli, V Kesavaraj, Anil Kumar Vuppala

First submitted to arXiv on: 22 Dec 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Audio and Speech Processing (eess.AS)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces a novel approach to detecting dysarthria, a speech disorder, and assessing its severity. The method combines speech and text modalities through a cross-attention mechanism that learns acoustic and linguistic similarities between them, which helps capture pronunciation deviations across different severity levels. Experiments on the UA-Speech dysarthric database achieve accuracies of 99.53% and 93.20% for detection, and 98.12% and 51.97% for severity assessment, across different evaluation settings. A code sketch of this cross-attention fusion idea follows the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Dysarthria is a speech disorder that makes it hard for people to speak clearly. Doctors need to diagnose it accurately so they can give the right treatment. This paper develops a new way to detect dysarthria and judge how severe it is, using both the audio of what someone says and the text of what they were supposed to say. By comparing the two, the method can tell how far a person's pronunciation deviates from what is expected. The method is very accurate and could lead to better diagnoses.

Keywords

» Artificial intelligence  » Cross attention