Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators

by Wiebke Hutiri, Orestis Papakyriakopoulos, Alice Xiang

First submitted to arXiv on: 25 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Audio and Speech Processing (eess.AS)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates the ethical and safety risks of the rapid adoption of artificial intelligence (AI) to generate human-like speech. The authors analyze incidents in which AI-generated voices have been used in attacks such as swatting, showing that these risks are not isolated but arise from complex interactions between stakeholders and technical systems. The study finds that specific harms can be categorized based on the exposure of affected individuals or on the motives of those creating and deploying the systems (these two axes are sketched as a toy data model after the summaries below). Building on this understanding, the authors propose a framework for modeling pathways to ethical and safety harms in AI systems and develop a taxonomy of harms specific to speech generators. This research aims to inform policy interventions and decision-making for responsible AI development.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how artificial intelligence (AI) is changing the way we communicate with each other. Sometimes AI makes fake voices that can be used to do mean things, like tricking police officers or scaring people. The authors of this study looked at examples of these fake voices being used in bad ways and found patterns. They realized that different kinds of harm can happen depending on who is affected by the fake voices and why they’re being used. This helps us understand how to make sure AI technology is safe and fair for everyone.

Keywords

» Artificial intelligence