Summary of Ev-Edge: Efficient Execution of Event-based Vision Algorithms on Commodity Edge Platforms, by Shrihari Sridharan et al.
Ev-Edge: Efficient Execution of Event-based Vision Algorithms on Commodity Edge Platforms
by Shrihari Sridharan, Surya Selvam, Kaushik Roy, Anand Raghunathan
First submitted to arXiv on: 23 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC); Robotics (cs.RO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract (see the arXiv listing). |
| Medium | GrooveSquid.com (original content) | In this paper, the researchers propose Ev-Edge, a framework for efficiently executing event-based vision workloads on commodity edge platforms. Event cameras offer high temporal resolution and dynamic range, making them well suited to autonomous navigation tasks, but processing their event streams requires a mix of Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs), and hybrid SNN-ANN algorithms, which is computationally demanding on edge hardware. Ev-Edge addresses this with three key optimizations: an Event2Sparse Frame converter, a Dynamic Sparse Frame Aggregator, and a Network Mapper, which together improve hardware utilization, reduce latency, and save energy (a rough illustrative sketch of this pipeline follows the table). |
| Low | GrooveSquid.com (original content) | Event cameras are used in autonomous navigation because they offer high temporal resolution, high dynamic range, and no motion blur. To get good results, their event streams must be processed with Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs), or hybrid SNN-ANN algorithms. The problem is that edge platforms cannot run these workloads efficiently, because the characteristics of the event streams and algorithms do not match those of the hardware. Ev-Edge solves this with three key steps: an Event2Sparse Frame converter, a Dynamic Sparse Frame Aggregator, and a Network Mapper, making the system faster and more energy efficient. |
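
The summaries above describe the pipeline (events → sparse frames → dynamic aggregation → network mapping) only at a high level. As a rough illustration of what the first two stages might look like, here is a minimal Python sketch; the function names, the fixed time window, and the event-count budget are assumptions made for illustration and are not taken from the paper's actual implementation.

```python
def events_to_sparse_frames(events, window_us=10_000):
    """Bin event-camera events into fixed time windows, keeping only active pixels.

    events: iterable of (x, y, timestamp_us, polarity) tuples, sorted by time.
    Returns a list of sparse frames, each a dict mapping (x, y) -> accumulated polarity.
    (Illustrative only; the paper's Event2Sparse Frame converter may work differently.)
    """
    frames, frame, t0 = [], {}, None
    for x, y, t, p in events:
        if t0 is None:
            t0 = t
        if t - t0 >= window_us:           # close the current window
            frames.append(frame)
            frame, t0 = {}, t
        frame[(x, y)] = frame.get((x, y), 0) + (1 if p else -1)
    if frame:
        frames.append(frame)
    return frames


def aggregate_frames(frames, max_events_per_frame=2048):
    """Merge consecutive sparse frames until an event-count budget is reached,
    loosely mimicking a dynamic aggregation step driven by event rate."""
    merged, current, count = [], {}, 0
    for f in frames:
        for k, v in f.items():
            current[k] = current.get(k, 0) + v
        count += len(f)
        if count >= max_events_per_frame:
            merged.append(current)
            current, count = {}, 0
    if current:
        merged.append(current)
    return merged


# Example usage with a handful of synthetic events (x, y, timestamp_us, polarity):
events = [(10, 12, 0, 1), (10, 12, 5_000, 0), (40, 7, 12_000, 1)]
frames = events_to_sparse_frames(events)                 # two sparse frames
batches = aggregate_frames(frames, max_events_per_frame=2)
```

The sketch only conveys the general idea of converting an asynchronous event stream into sparse, rate-adaptive frames that downstream ANN/SNN networks can consume; the paper's Network Mapper, which assigns networks to the heterogeneous compute units of an edge platform, is not modeled here.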