Contact-aware Human Motion Generation from Textual Descriptions

by Sihan Ma, Qiong Cao, Jing Zhang, Dacheng Tao

First submitted to arXiv on: 23 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper tackles the challenge of generating 3D interactive human motion from text. The task is difficult because existing motion data and textual descriptions rarely capture the interactions between body parts and static objects, which leads to unnatural generated sequences. To address this, the authors build a novel dataset, RICH-CAT, which pairs high-quality motion with accurate contact labels and detailed textual descriptions across 26 indoor and outdoor actions. The proposed approach, CATMO, integrates human body contacts as explicit evidence for motion generation: VQ-VAE models encode motion and contact sequences into discrete tokens, and an intertwined GPT generates the two token streams jointly. A pre-trained text encoder learns embeddings that discriminate among contact types, allowing precise control over the synthesized motions. Experiments show that CATMO outperforms existing methods, producing stable, contact-aware motion sequences.
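To make the described pipeline concrete, below is a minimal PyTorch sketch of such a two-stage design: a sequence VQ-VAE that turns a feature stream (motion or contact) into discrete tokens, and a small autoregressive transformer that models an interleaved motion/contact token stream conditioned on a text embedding. All class names, layer choices, and dimensions here are illustrative assumptions, not the paper's actual implementation; the real system would use far stronger encoders/decoders and a pre-trained text encoder.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through estimator.
    (Commitment/codebook losses are omitted for brevity.)"""

    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                        # z: (B, T, dim)
        # Squared distance from each latent to every codebook entry.
        d = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        idx = d.argmin(dim=-1)                   # discrete token ids, (B, T)
        q = self.codebook(idx)                   # quantized latents
        q = z + (q - z).detach()                 # straight-through gradient
        return q, idx


class SeqVQVAE(nn.Module):
    """Tiny sequence VQ-VAE; one instance each for the motion and contact streams."""

    def __init__(self, in_dim: int, dim: int = 64, num_codes: int = 256):
        super().__init__()
        self.enc = nn.Linear(in_dim, dim)        # stand-in for a real encoder
        self.vq = VectorQuantizer(num_codes, dim)
        self.dec = nn.Linear(dim, in_dim)        # stand-in for a real decoder

    def forward(self, x):                        # x: (B, T, in_dim)
        q, idx = self.vq(self.enc(x))
        return self.dec(q), idx


class IntertwinedGPT(nn.Module):
    """Autoregressive transformer over interleaved motion/contact tokens,
    conditioned on a text embedding prepended at position 0.
    (Positional encodings are omitted for brevity.)"""

    def __init__(self, vocab: int, dim: int = 64, text_dim: int = 64):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.text_proj = nn.Linear(text_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, text_emb):
        # tokens: (B, L) interleaved as [m0, c0, m1, c1, ...]; text_emb: (B, text_dim)
        h = torch.cat([self.text_proj(text_emb)[:, None], self.tok(tokens)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1)).to(h.device)
        h = self.blocks(h, mask=mask)
        return self.head(h[:, :-1])              # next-token logits per input position
```

At inference time, motion and contact tokens would be sampled alternately from the GPT given the text embedding, then each token stream would be decoded by its own VQ-VAE decoder. Interleaving the two streams is what lets each motion token condition on the preceding contact token and vice versa.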
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about using words to create 3D animations of people moving. Right now, it’s hard to make these animations look natural because we don’t have good data that shows how people move and interact with objects. To fix this, the authors created a special dataset called RICH-CAT that includes lots of examples of people moving and interacting with things in different environments. They then developed a new way to use computers to create these animations based on text descriptions. This method is better than other approaches because it takes into account how people actually move and interact with objects, making the animations look more realistic.

Keywords

* Artificial intelligence  * Encoder  * GPT