Summary of MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations, by Ruiyuan Lyu et al.
MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations
by Ruiyuan Lyu, Tai Wang, Jingli Lin, Shuai Yang, Xiaohan Mao, Yilun Chen, Runsen Xu, Haifeng Huang, Chenming Zhu, Dahua Lin, Jiangmiao Pang
First submitted to arXiv on: 13 Jun 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Robotics (cs.RO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper addresses the challenge of multi-modal 3D perception by building MMScan, the first large-scale 3D scene dataset and benchmark with hierarchical grounded language annotations. The annotations are constructed with a top-down logic, covering both spatial and attribute understanding, and include over 1.4M meta-annotated captions on 109k objects and 7.7k regions. The paper evaluates representative baselines on its benchmarks, analyzes their capabilities, and highlights key problems to be addressed in the future. It also uses this high-quality dataset to train state-of-the-art 3D visual grounding and language models, achieving remarkable performance improvements. (See the illustrative sketch after this table.) |
Low | GrooveSquid.com (original content) | This research creates a massive dataset that helps computers better understand the world around us. Imagine being able to teach a computer to recognize objects and their relationships in a room or scene. That’s what this paper is all about. It builds a big dataset with lots of information about 3D scenes, including what things are, where they are, and how they relate to each other. This will help computers get better at understanding our world and could even be used for things like virtual assistants or self-driving cars. |
Keywords
» Artificial intelligence » Grounding » Multi-modal