Tracking-Assisted Object Detection with Event Cameras

by Ting-Kang Yen, Igor Morawski, Shusil Dangi, Kai He, Chung-Yi Lin, Jia-Fong Yeh, Hung-Ting Su, Winston Hsu

First submitted to arXiv on: 27 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on its arXiv page.

Medium Difficulty Summary (original content by GrooveSquid.com)
Event-based object detection has gained attention because of event cameras’ exceptional properties, such as high dynamic range and the absence of motion blur. However, feature asynchronism and sparsity make objects effectively invisible when there is no relative motion to the camera, which poses a significant challenge for this task. The paper proposes an explicitly learned memory, guided by the tracking objective, that records object displacements across frames and improves the detection of such pseudo-occluded objects. An auto-labeling algorithm for event camera datasets is introduced to append visibility labels and clean existing data. A spatio-temporal feature aggregation module and a consistency loss are also proposed to increase robustness. Experimental results show a significant improvement in mAP (7.9% absolute) over state-of-the-art approaches, demonstrating the effectiveness of the method. (A minimal illustrative sketch of the memory-guided aggregation idea follows the summaries below.)

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about improving object detection with special cameras that capture events instead of regular images. Event cameras have useful properties, such as handling a very wide range of light levels without motion blur, but objects become hard to detect when they are not moving relative to the camera. The paper proposes a new way to keep track of objects even when they are not visible for a long time. It also introduces a new algorithm that labels data automatically and cleans it for training models. The results show that this method outperforms previous methods, with an absolute improvement of 7.9% in mAP.

Keywords

  • Artificial intelligence
  • Attention
  • Object detection
  • Tracking