Summary of Faces That Speak: Jointly Synthesising Talking Face and Speech From Text, by Youngjoon Jang et al.
Faces that Speak: Jointly Synthesising Talking Face and Speech from Text
by Youngjoon Jang, Ji-Hoon Kim, Junseok Ahn, Doyeop Kwak, Hong-Sun Yang, Yoon-Cheol Ju, Il-Hwan Kim, Byeong-Yeol Kim, Joon Son Chung
First submitted to arXiv on: 16 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Sound (cs.SD); Audio and Speech Processing (eess.AS); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper's original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | The paper proposes a unified framework that generates natural talking faces and speech from text at the same time. The framework combines Talking Face Generation (TFG) and Text-to-Speech (TTS) systems, addressing two key challenges: generating realistic head poses and keeping the voice consistent despite variations in facial motion. The authors introduce a motion sampler based on conditional flow matching for efficient, high-quality motion code generation (a generic sketch of this idea appears below the table) and a novel conditioning method that allows the TTS system to produce uniform speech outputs. Extensive experiments demonstrate that the method creates natural-looking talking faces and accurate speech that matches the input text. |
Low | GrooveSquid.com (original content) | The researchers built a computer program that can make videos of people's faces talk. It combines two jobs in one system: making faces look like they're talking (Talking Face Generation) and turning written words into spoken sounds (Text-to-Speech). The big challenge was making the face movements match the sounds, so it looks like the person is really saying the words. To solve this, they came up with a clever way to generate motion codes quickly and accurately, and they found a way to keep the speech sounding consistent even when the facial expressions change. The results show that the program can create very natural-looking talking faces with matching audio. |
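The medium difficulty summary mentions a motion sampler based on conditional flow matching. As a rough illustration of that general technique, not the authors' actual implementation, the PyTorch sketch below trains a small network to predict the velocity along a straight path from noise to a target motion code, then generates new codes with a few Euler steps of the learned ODE. All names and dimensions here (`MotionVelocityNet`, `motion_dim`, `cond_dim`) are assumptions chosen for illustration.

```python
# Minimal sketch of conditional flow matching (CFM) for motion-code sampling.
# Generic illustration only; module names and dimensions are assumptions.
import torch
import torch.nn as nn

class MotionVelocityNet(nn.Module):
    """Predicts the velocity field v(x_t, t, cond) for motion codes."""
    def __init__(self, motion_dim=64, cond_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + 1 + cond_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x_t, t, cond):
        # x_t: (batch, motion_dim), t: (batch, 1), cond: (batch, cond_dim)
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def cfm_loss(model, x1, cond):
    """Flow-matching loss: regress the constant velocity of a straight
    path from a noise sample x0 to the target motion code x1."""
    x0 = torch.randn_like(x1)                        # noise endpoint
    t = torch.rand(x1.size(0), 1, device=x1.device)  # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1                      # linear interpolation
    target_velocity = x1 - x0                        # constant along the path
    pred_velocity = model(x_t, t, cond)
    return ((pred_velocity - target_velocity) ** 2).mean()

@torch.no_grad()
def sample_motion(model, cond, motion_dim=64, steps=10):
    """Generate a motion code by integrating the learned ODE with a few
    Euler steps, starting from Gaussian noise."""
    x = torch.randn(cond.size(0), motion_dim, device=cond.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((cond.size(0), 1), i * dt, device=cond.device)
        x = x + dt * model(x, t, cond)
    return x

# Example usage with hypothetical data:
model = MotionVelocityNet(motion_dim=64, cond_dim=128)
x1 = torch.randn(8, 64)     # target motion codes (e.g. from an encoder)
cond = torch.randn(8, 128)  # conditioning features (e.g. identity, text)
loss = cfm_loss(model, x1, cond)
loss.backward()
codes = sample_motion(model, cond)  # (8, 64) generated motion codes
```

One reason flow matching is attractive in this setting is that sampling needs only a handful of ODE steps, which lines up with the summary's point about generating motion codes efficiently and at high quality.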