Summary of ED-ViT: Splitting Vision Transformer for Distributed Inference on Edge Devices, by Xiang Liu et al.
ED-ViT: Splitting Vision Transformer for Distributed Inference on Edge Devices
by Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
First submitted to arXiv on: 15 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the arXiv page |
Medium | GrooveSquid.com (original content) | The proposed ED-ViT framework is a novel Vision Transformer splitting approach designed to efficiently execute complex models across multiple edge devices for real-time data analytics. By partitioning a Vision Transformer into sub-models, each tailored to handle a specific subset of data classes, ED-ViT minimizes computation overhead and inference latency while maintaining test accuracy comparable to the original Vision Transformer. In the paper's evaluations, the framework reduces model size by up to 28.9 times and 34.1 times, making it an effective solution for deploying deep learning models on resource-constrained edge devices. A minimal code sketch of the class-partitioned inference idea follows the table below. |
Low | GrooveSquid.com (original content) | ED-ViT is a new way to split big computer vision models into smaller pieces that can work together on small devices like smart cameras or smartphones. This makes it possible to use powerful AI models even when there isn't much computing power available. The model is broken down into smaller parts, each one focusing on a specific type of image. This reduces the amount of computing needed and speeds up how fast the AI can make predictions. |
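
To make the splitting idea more concrete, here is a minimal PyTorch sketch of class-partitioned distributed inference: each sub-model covers a subset of the global classes, and a simple confidence-based rule fuses their outputs. The `SubModel` class, the confidence-based fusion, and the toy backbones are illustrative assumptions made for this summary, not the actual ED-ViT implementation described in the paper.

```python
# Illustrative sketch only: class-partitioned inference with confidence fusion.
# The sub-model structure and fusion rule are assumptions, not the paper's code.
from typing import List, Tuple

import torch
import torch.nn as nn


class SubModel(nn.Module):
    """A lightweight sub-model responsible for a subset of the data classes."""

    def __init__(self, backbone: nn.Module, class_subset: List[int]):
        super().__init__()
        self.backbone = backbone          # split/pruned ViT for one device (assumed)
        self.class_subset = class_subset  # global class ids this sub-model covers

    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        # Return per-sample confidence and the predicted *global* class id.
        logits = self.backbone(x)                   # [B, len(class_subset)]
        probs = logits.softmax(dim=-1)
        conf, local_idx = probs.max(dim=-1)         # best class within the subset
        subset = torch.tensor(self.class_subset, device=x.device)
        return conf, subset[local_idx]


@torch.no_grad()
def distributed_predict(x: torch.Tensor, sub_models: List[SubModel]) -> torch.Tensor:
    """Run every sub-model (in practice, one per edge device) and keep the
    most confident prediction per sample -- one plausible fusion rule."""
    confs, preds = zip(*(m(x) for m in sub_models))
    confs = torch.stack(confs)                      # [num_devices, B]
    preds = torch.stack(preds)                      # [num_devices, B]
    best_device = confs.argmax(dim=0)               # most confident sub-model per sample
    return preds.gather(0, best_device.unsqueeze(0)).squeeze(0)


if __name__ == "__main__":
    # Toy stand-ins for the split sub-models: two "devices", four classes each.
    def make_backbone(num_classes: int) -> nn.Module:
        return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    sub_models = [
        SubModel(make_backbone(4), class_subset=[0, 1, 2, 3]),
        SubModel(make_backbone(4), class_subset=[4, 5, 6, 7]),
    ]
    images = torch.randn(8, 3, 32, 32)
    print(distributed_predict(images, sub_models))  # global class ids, shape [8]
```

In the framework described above, each sub-model would run on its own edge device, so only a small (confidence, predicted class) pair needs to be exchanged per sample, which keeps per-device model size and communication low.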
Keywords
» Artificial intelligence » Deep learning » Inference » Vision Transformer » ViT