Summary of MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World, by Yining Hong et al.
MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World
by Yining Hong, Zishuo Zheng, Peihao Chen, Yian Wang, Junyan Li, Chuang Gan
First submitted to arXiv on: 16 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Robotics (cs.RO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed MultiPLY model is a multisensory embodied large language model that actively interacts with objects in a 3D environment and collects multisensory information. It incorporates visual, audio, tactile, and thermal cues into large language models, establishing correlations among words, actions, and percepts (a conceptual sketch of this idea follows the table). The model is trained on the Multisensory Universe dataset, which consists of 500k interactions between an LLM-powered embodied agent and a 3D environment. MultiPLY outperforms baselines on a range of embodied tasks, including object retrieval, tool use, multisensory captioning, and task decomposition. |
Low | GrooveSquid.com (original content) | MultiPLY is a new type of AI model that can learn from senses and actions. It’s like having a super smart robot that can see, hear, touch, and feel things just like we do! The model is trained on lots of data about an AI agent interacting with objects in a 3D world. This helps it understand how words are connected to what it does and perceives. MultiPLY is really good at tasks that involve finding objects, using tools, and even writing descriptions of what’s happening. |
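The medium-difficulty summary describes sensory cues being fed into a large language model alongside words and actions. Below is a minimal, hypothetical Python sketch of that general idea: interleaving placeholder sensor tokens with a text instruction before querying an LLM. This is not the authors' code; the `Observation` class, the token names, and `build_prompt` are illustrative assumptions, and a real system would splice learned sensor embeddings into the model rather than plain text tokens.

```python
# Hypothetical sketch (not the paper's implementation): serializing
# multisensory percepts into an LLM-style token stream.

from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    modality: str            # e.g. "visual", "audio", "tactile", "thermal"
    embedding: List[float]   # sensor features projected into the LLM's embedding space

def build_prompt(instruction: str, observations: List[Observation]) -> str:
    """Interleave the instruction with placeholder sensor tokens.

    A real multisensory LLM would substitute the corresponding sensor
    embeddings at these token positions instead of literal text.
    """
    parts = [instruction]
    for obs in observations:
        # Placeholder tokens such as "<VISUAL>" or "<TACTILE>" mark where
        # each percept would be injected.
        parts.append(f"<{obs.modality.upper()}>")
    return " ".join(parts)

# Usage: the agent gathers percepts by acting in the 3D scene, then queries the LLM.
percepts = [Observation("visual", [0.1] * 8), Observation("tactile", [0.3] * 8)]
print(build_prompt("Find the softest object on the table.", percepts))
```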
Keywords
* Artificial intelligence
* Large language model