
Images that Sound: Composing Images and Sounds on a Single Canvas

by Ziyang Chen, Daniel Geng, Andrew Owens

First submitted to arXiv on: 20 May 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG); Multimedia (cs.MM); Sound (cs.SD); Audio and Speech Processing (eess.AS)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors; this is the paper's original abstract)
Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers explore the intersection of natural images and audio by creating spectrograms that simultaneously look like natural images and sound like natural audio. The team leverages pre-trained text-to-image and text-to-spectrogram diffusion models to generate these “visual spectrograms,” which align with a desired audio prompt while also matching the visual appearance of a desired image prompt. The approach is zero-shot, meaning it requires no additional training data. The study evaluates the method’s performance through quantitative metrics and human perceptual studies.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you have a favorite song and a beautiful landscape photo. This paper shows that it’s possible to create a visual representation of sound (called a spectrogram) that looks like the photo and sounds like your song! The researchers used special computer models to make this happen, without needing any extra training data. They tested their method by showing people the generated spectrograms and asking if they looked and sounded natural. The results are impressive!
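The core idea in the summaries above, steering a single canvas with two pre-trained diffusion models at once, can be read as averaging the two models' noise predictions at each denoising step. Below is a minimal toy sketch under that assumption; the `toy_image_denoiser` and `toy_audio_denoiser` functions are hypothetical stand-ins for the paper's text-to-image and text-to-spectrogram models, and the reverse process is deliberately simplified, not the authors' actual sampler.

```python
import numpy as np

def toy_image_denoiser(x, t):
    # Hypothetical stand-in for a text-to-image model's noise prediction.
    return 0.1 * x

def toy_audio_denoiser(x, t):
    # Hypothetical stand-in for a text-to-spectrogram model's noise prediction.
    return -0.05 * x

def combined_noise_estimate(x, t, w_img=0.5, w_aud=0.5):
    """Average the two models' noise estimates so one canvas is pushed
    toward satisfying both the image prompt and the audio prompt."""
    return w_img * toy_image_denoiser(x, t) + w_aud * toy_audio_denoiser(x, t)

def reverse_diffusion(x, steps=50, step_size=0.1):
    """Simplified reverse process: repeatedly subtract the combined estimate."""
    for t in range(steps, 0, -1):
        x = x - step_size * combined_noise_estimate(x, t)
    return x

rng = np.random.default_rng(0)
canvas = rng.standard_normal((64, 64))  # a shared "spectrogram" canvas
result = reverse_diffusion(canvas)
print(result.shape)  # (64, 64)
```

The resulting array plays the role of the shared canvas: viewed as an image it would be rendered directly, and viewed as a spectrogram it would be inverted back to a waveform. A real implementation would use the actual diffusion models' noise predictions in place of the toy functions.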

Keywords

» Artificial intelligence  » Diffusion  » Prompt  » Zero shot