Summary of EMMA: End-to-End Multimodal Model for Autonomous Driving, by Jyh-Jing Hwang et al.
EMMA: End-to-End Multimodal Model for Autonomous Driving
by Jyh-Jing Hwang, Runsheng Xu, Hubert Lin, Wei-Chih Hung, Jingwei Ji, Kristy Choi, Di Huang, Tong He, Paul Covington, Benjamin Sapp, Yin Zhou, James Guo, Dragomir Anguelov, Mingxing Tan
First submitted to arXiv on: 30 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Robotics (cs.RO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | EMMA, an End-to-End Multimodal Model for Autonomous Driving, directly maps raw camera sensor data into various driving-specific outputs, including planner trajectories, perception objects, and road graph elements. Built on a multimodal large language model foundation, EMMA represents all non-sensor inputs and outputs as natural language text, allowing it to jointly process various driving tasks in a unified language space. Empirically, EMMA achieves state-of-the-art performance in motion planning on nuScenes and competitive results on the Waymo Open Motion Dataset (WOMD) and Waymo Open Dataset (WOD). Co-training EMMA with planner trajectories, object detection, and road graph tasks yields improvements across all three domains. However, EMMA has limitations: it processes only a small number of image frames, lacks accurate 3D sensing modalities like LiDAR or radar, and is computationally expensive. |
| Low | GrooveSquid.com (original content) | EMMA is a new way for cars to drive themselves. It uses a special kind of AI that can understand both pictures and words. This helps the car make decisions about where to go and what to do. EMMA is really good at planning routes and recognizing objects on the road. In tests, it did better than other models in some areas and just as well in others. The researchers think this could be a big step forward for self-driving cars, but there are still some problems they need to fix. |
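To make the "unified language space" idea concrete, here is a minimal, hedged sketch of serializing a planner trajectory as plain text and parsing it back. The waypoint format and function names are illustrative assumptions, not the paper's actual representation or API:

```python
# Illustrative sketch (not EMMA's actual code): representing a driving
# trajectory as text, in the spirit of EMMA's practice of expressing
# non-sensor inputs and outputs as natural language tokens.

def trajectory_to_text(waypoints):
    """Serialize (x, y) waypoints (meters) into a text token sequence."""
    return " ".join(f"({x:.2f},{y:.2f})" for x, y in waypoints)

def text_to_trajectory(text):
    """Parse the text form back into a list of (x, y) float tuples."""
    points = []
    for token in text.split():
        x, y = token.strip("()").split(",")
        points.append((float(x), float(y)))
    return points

# Round trip: a short forward-and-left trajectory survives the text encoding.
plan = [(0.0, 0.0), (1.5, 0.1), (3.0, 0.4)]
encoded = trajectory_to_text(plan)
assert text_to_trajectory(encoded) == plan
print(encoded)  # (0.00,0.00) (1.50,0.10) (3.00,0.40)
```

Because every task's inputs and outputs share this one text format, a single language model can be co-trained on planning, detection, and road-graph tasks without task-specific output heads.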
Keywords
» Artificial intelligence » Large language model » Multi modal » Object detection