
Summary of LiDAR-based End-to-end Temporal Perception for Vehicle-Infrastructure Cooperation, by Zhenwei Yang et al.


LiDAR-based End-to-end Temporal Perception for Vehicle-Infrastructure Cooperation

by Zhenwei Yang, Jilei Mao, Wenxian Yang, Yibo Ai, Yu Kong, Haibao Yu, Weidong Zhang

First submitted to arXiv on: 22 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com original content)
Temporal perception is crucial in autonomous driving, enabling the detection and tracking of objects over time to maintain a comprehensive understanding of dynamic environments. However, incomplete perception due to occluded objects and observational blind spots hinders this task. To address these challenges, we introduce LET-VIC, a LiDAR-based End-to-End Tracking framework for Vehicle-Infrastructure Cooperation (VIC). LET-VIC leverages V2X communication to enhance temporal perception by fusing spatial and temporal data from vehicle and infrastructure sensors. It integrates Bird’s Eye View features from LiDAR data to mitigate occlusions and blind spots, incorporates temporal context across frames for enhanced tracking stability and accuracy, and includes a Calibration Error Compensation module to address sensor misalignments. LET-VIC significantly outperforms baseline models on the V2X-Seq-SPD dataset, achieving at least 13.7% improvement in mAP and 13.1% improvement in AMOTA without considering communication delays.
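The core fusion idea in the summary above, warping infrastructure BEV features into the vehicle frame (to offset calibration error) and combining the two views so each fills the other's occlusions, can be illustrated with a toy sketch. This is a hypothetical illustration, not LET-VIC's actual implementation: the function names, the integer-cell shift standing in for the Calibration Error Compensation module, and the max-fusion rule are all assumptions.

```python
import numpy as np

def warp_bev(feat, dx, dy):
    """Shift an infrastructure BEV feature map by integer cell offsets
    (dx, dy) to compensate an estimated calibration error. A stand-in
    for a learned calibration-compensation step."""
    return np.roll(np.roll(feat, dx, axis=0), dy, axis=1)

def fuse_bev(vehicle_feat, infra_feat, dx=0, dy=0):
    """Element-wise max fusion of vehicle and (warped) infrastructure
    BEV features, so cells occluded from one sensor can be observed
    by the other."""
    return np.maximum(vehicle_feat, warp_bev(infra_feat, dx, dy))

# Toy example: the vehicle's view misses an object that the
# roadside infrastructure sensor can see.
veh = np.zeros((4, 4)); veh[0, 0] = 1.0   # object seen by the vehicle
inf = np.zeros((4, 4)); inf[2, 2] = 1.0   # object occluded from the vehicle
fused = fuse_bev(veh, inf)                # contains both detections
```

In the fused map, both objects appear even though each sensor saw only one, which is the intuition behind using infrastructure data to mitigate blind spots.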
Low Difficulty Summary (GrooveSquid.com original content)
Imagine trying to drive a car while simultaneously keeping track of all the objects around you, including other cars, pedestrians, and buildings. This is called temporal perception, and it’s really important for self-driving cars to be able to do it well. The problem is that sometimes things can get hidden or out of view, making it hard to keep track of everything. To solve this problem, scientists created a new system called LET-VIC that uses special sensors and communication systems to help self-driving cars understand their environment better. This system is really good at finding and tracking objects, even when they’re partially hidden or moving quickly. It’s an important step towards making self-driving cars safer and more reliable.

Keywords

  • Artificial intelligence
  • Tracking