Summary of STTATTS: Unified Speech-To-Text And Text-To-Speech Model, by Hawau Olamide Toyin et al.


STTATTS: Unified Speech-To-Text And Text-To-Speech Model

by Hawau Olamide Toyin, Hao Li, Hanan Aldarmaki

First submitted to arXiv on: 24 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to joint learning of Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) models, leveraging multi-task learning objectives and shared parameters. The authors demonstrate that their joint model achieves performance comparable to individually trained models while reducing computational and memory costs by approximately 50%. The evaluation covers both English and Arabic, highlighting the model's versatility across resource-rich and low-resource languages. A minimal sketch of this shared-parameter, multi-task setup is shown after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about finding a new way to train computers to understand and generate speech. Usually, these tasks are handled by separate models, but the researchers show that they can be combined into one. This joint approach saves computing power and memory while still getting good results. The approach was tested on two languages: English, which has lots of data available, and Arabic, which has much less. The team made the training code and models public so other researchers can use them to improve speech recognition and synthesis.
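To make the shared-parameter, multi-task idea concrete, here is a minimal PyTorch sketch. This is not the authors' released code: the module names, model sizes, frame-aligned ASR targets, and equal loss weighting are all illustrative assumptions; it only shows how one shared trunk can serve both a speech-to-text head and a text-to-speech head under a combined objective.

```python
# Minimal sketch (not the authors' released code): one shared trunk with two
# task-specific heads, trained with a combined ASR + TTS loss.
import torch
import torch.nn as nn

class SharedSpeechTextModel(nn.Module):
    def __init__(self, vocab_size=100, n_mels=80, d_model=256):
        super().__init__()
        # Parameters in this trunk are shared by both tasks.
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
            num_layers=4,
        )
        # Task-specific input projections and output heads.
        self.speech_in = nn.Linear(n_mels, d_model)        # ASR input: mel frames
        self.text_in = nn.Embedding(vocab_size, d_model)   # TTS input: token ids
        self.asr_head = nn.Linear(d_model, vocab_size)     # ASR output: token logits
        self.tts_head = nn.Linear(d_model, n_mels)         # TTS output: mel frames

    def forward(self, mels=None, tokens=None):
        out = {}
        if mels is not None:                                # speech -> text
            h = self.trunk(self.speech_in(mels))
            out["asr_logits"] = self.asr_head(h)
        if tokens is not None:                              # text -> speech
            h = self.trunk(self.text_in(tokens))
            out["tts_mels"] = self.tts_head(h)
        return out

model = SharedSpeechTextModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative multi-task step on dummy data.
mels = torch.randn(2, 50, 80)                 # (batch, frames, mel bins)
asr_targets = torch.randint(0, 100, (2, 50))  # frame-aligned targets, for simplicity
tokens = torch.randint(0, 100, (2, 20))       # (batch, text length)
tts_targets = torch.randn(2, 20, 80)

out = model(mels=mels, tokens=tokens)
asr_loss = nn.functional.cross_entropy(
    out["asr_logits"].reshape(-1, 100), asr_targets.reshape(-1))
tts_loss = nn.functional.mse_loss(out["tts_mels"], tts_targets)
loss = asr_loss + tts_loss                    # equal weighting; a tunable trade-off
optim.zero_grad()
loss.backward()
optim.step()
```

Because both tasks update the same trunk, only the small task-specific heads add extra parameters, which is the intuition behind the roughly 50% reduction in compute and memory reported for the joint model compared with two separately trained models.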

Keywords

» Artificial intelligence
» Multi-task