

Self-Supervised Speech Representations are More Phonetic than Semantic

by Kwanghee Choi, Ankita Pasad, Tomohiko Nakamura, Satoru Fukayama, Karen Livescu, Shinji Watanabe

First submitted to arXiv on: 12 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Self-supervised speech models (S3Ms) have become a popular choice for speech applications, encoding linguistic properties in their word representations. To probe the linguistic properties encoded by S3Ms at the word level, the authors curate a dataset of near-homophone and synonym word pairs and measure the similarity between the corresponding S3M word representations. The results show that S3M representations consistently exhibit more phonetic than semantic similarity. Furthermore, this work questions whether widely used intent classification datasets, such as Fluent Speech Commands and Snips Smartlights, are adequate for measuring semantic abilities. A simple baseline using only the word identity surpasses S3M-based models, suggesting that high scores on these datasets do not necessarily guarantee the presence of semantic content.
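The similarity comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the random vectors stand in for real S3M representations (e.g., mean-pooled frame outputs from a model such as HuBERT), and the word pairs are hypothetical examples of the near-homophone and synonym categories.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two pooled word representations.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
dim = 768  # a common S3M hidden size; an assumption for this sketch

# Placeholder vectors: in the study, each word's representation would be
# obtained from an S3M by pooling over the frames of the spoken word.
rep = {w: rng.standard_normal(dim) for w in ["night", "knight", "big", "large"]}

homophone_sim = cosine(rep["night"], rep["knight"])  # near-homophone pair
synonym_sim = cosine(rep["big"], rep["large"])       # synonym pair

# The paper's finding is that, with real S3M representations, the
# homophone similarity tends to exceed the synonym similarity,
# i.e., the representations are more phonetic than semantic.
print(homophone_sim, synonym_sim)
```

With genuine S3M features in place of the random vectors, averaging these similarities over many word pairs yields the phonetic-versus-semantic comparison the paper reports.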
Low Difficulty Summary (written by GrooveSquid.com, original content)
This study looks at how speech models handle words. It checks what the models pick up from pairs of words that sound alike and pairs of words that mean the same thing. The results show that these models are better at recognizing sounds than meanings. This matters because some datasets used to measure language understanding might not be good measures after all: a simple method that only uses word identity actually beats the fancy speech models.

Keywords

» Artificial intelligence  » Classification  » Self supervised