
Summary of Attention-based Feature Compression for CNN Inference Offloading in Edge Computing, by Nan Li et al.


Attention-based Feature Compression for CNN Inference Offloading in Edge Computing

by Nan Li, Alexandros Iosifidis, Qi Zhang

First submitted to arXiv on: 24 Nov 2022

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel autoencoder-based CNN architecture (AECNN) is proposed for efficient feature extraction at end-devices in device-edge co-inference systems. The AECNN uses a feature compression module based on channel attention to compress the intermediate data and reduce communication overhead. Entropy encoding is employed to remove statistical redundancy, and a lightweight decoder is designed to reconstruct the intermediate data and recover accuracy. Experimental results show that AECNN can compress the intermediate data by more than 256x with only about a 4% loss in accuracy, outperforming BottleNet++. Compared to offloading inference tasks directly to the edge server, AECNN completes inference tasks faster under poor wireless channel conditions.
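To make the described architecture more concrete, below is a minimal sketch (not the authors' implementation) of channel-attention-based feature compression with a lightweight decoder for split inference. The class names, the squeeze-and-excitation-style attention, the choice of 16 kept channels, and the split point are illustrative assumptions; quantization and the entropy-encoding step applied before transmission are omitted.

```python
# Minimal sketch of channel-attention-based feature compression for split
# inference: an attention module scores the intermediate feature channels,
# only the top-k channels are kept for transmission to the edge server, and
# a lightweight decoder expands them back to the full channel dimension.
# All module names and sizes are illustrative, not the paper's code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style scoring of feature channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pool, then map to a per-channel score in [0, 1].
        return self.fc(x.mean(dim=(2, 3)))  # shape (B, C)


class FeatureCompressor(nn.Module):
    """Keep only the k highest-scoring channels of the intermediate tensor."""

    def __init__(self, channels: int, kept_channels: int):
        super().__init__()
        self.attention = ChannelAttention(channels)
        self.kept_channels = kept_channels

    def forward(self, x: torch.Tensor):
        scores = self.attention(x)                              # (B, C)
        kept = scores.topk(self.kept_channels, dim=1).indices   # (B, k)
        idx = kept.unsqueeze(-1).unsqueeze(-1).expand(
            -1, -1, x.size(2), x.size(3))
        compressed = torch.gather(x, 1, idx)                    # (B, k, H, W)
        return compressed, kept


class LightweightDecoder(nn.Module):
    """Reconstruct the full channel dimension from the kept channels."""

    def __init__(self, kept_channels: int, channels: int):
        super().__init__()
        self.expand = nn.Conv2d(kept_channels, channels, kernel_size=1)

    def forward(self, compressed: torch.Tensor) -> torch.Tensor:
        return self.expand(compressed)


if __name__ == "__main__":
    # Hypothetical split point: a 256-channel intermediate feature map,
    # compressed to 16 channels before transmission to the edge server.
    feats = torch.randn(1, 256, 28, 28)
    compressor = FeatureCompressor(channels=256, kept_channels=16)
    decoder = LightweightDecoder(kept_channels=16, channels=256)
    compressed, kept_idx = compressor(feats)
    reconstructed = decoder(compressed)
    print(compressed.shape, reconstructed.shape)
```

In practice, the compressed tensor would be quantized and entropy-encoded before transmission, and the decoder on the edge server would feed the reconstructed features into the remaining CNN layers; the sketch keeps only the channel-selection and reconstruction steps.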
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper creates a new way for devices and edge computers to work together. The authors build a special type of neural network that reduces the amount of data that needs to be sent between the device and the edge server. This makes things faster and more efficient, especially when the connection is slow or unreliable. The results show that the method works well and could be used in real-life applications.

Keywords

  • Artificial intelligence
  • Attention
  • Autoencoder
  • CNN
  • Decoder
  • Feature extraction
  • Inference
  • Neural network