Summary of I-SplitEE: Image Classification in Split Computing DNNs with Early Exits, by Divya Jyoti Bajpai et al.


I-SplitEE: Image classification in Split Computing DNNs with Early Exits

by Divya Jyoti Bajpai, Aastha Jaiswal, Manjesh Kumar Hanawal

First submitted to arXiv on: 19 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract in the paper.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper presents a unified approach that merges early exits within Deep Neural Networks (DNNs) with partial offloading of computation to the cloud, known as split computing. The proposed method determines the optimal depth (splitting layer) in the DNN for edge-device computation, balancing accuracy, computational efficiency, and communication cost. The authors also introduce I-SplitEE, an online unsupervised algorithm suited to scenarios where ground-truth labels are unavailable and data arrives sequentially. Experiments on the Caltech-256 and CIFAR-10 datasets under varied image distortions show that I-SplitEE reduces costs by at least 55% with a performance degradation of at most 5%. This approach can improve the deployment of DNNs on resource-constrained platforms such as edge, mobile, and IoT devices.
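The summary above describes an online, unsupervised choice of split layer that trades accuracy against compute and communication cost. The paper's actual I-SplitEE algorithm is not reproduced here; the following is only a minimal sketch of that general idea, using a UCB-style bandit over candidate split layers with early-exit confidence as an unsupervised reward proxy. All layer indices, cost numbers, and confidence values are synthetic assumptions for illustration.

```python
import math
import random

# Synthetic assumptions: candidate split layers, normalized per-layer
# edge-compute and communication costs, and the (unknown to the algorithm)
# mean early-exit confidence at each candidate split depth.
LAYERS = [2, 4, 6]
COMPUTE = {2: 0.1, 4: 0.3, 6: 0.7}   # deeper split = more work on the edge
COMM = {2: 0.7, 4: 0.3, 6: 0.1}      # deeper split = smaller features to send
MEAN_CONF = {2: 0.60, 4: 0.88, 6: 0.90}

def run_bandit(rounds=2000, lam=0.5, c=0.3, seed=0):
    """Pick a split layer online via UCB, using exit confidence as an
    unsupervised reward proxy (no ground-truth labels needed)."""
    rng = random.Random(seed)
    counts = {l: 0 for l in LAYERS}
    conf_sums = {l: 0.0 for l in LAYERS}
    for t in range(1, rounds + 1):
        untried = [l for l in LAYERS if counts[l] == 0]
        if untried:
            layer = untried[0]  # sample each candidate layer once first
        else:
            def score(l):
                avg_conf = conf_sums[l] / counts[l]
                cost = lam * (COMPUTE[l] + COMM[l])             # weighted resource cost
                bonus = c * math.sqrt(math.log(t) / counts[l])  # exploration bonus
                return avg_conf - cost + bonus
            layer = max(LAYERS, key=score)
        # Observe a noisy exit confidence for this round's input sample.
        conf = min(1.0, max(0.0, rng.gauss(MEAN_CONF[layer], 0.05)))
        counts[layer] += 1
        conf_sums[layer] += conf
    # The layer chosen most often is the split depth the policy settles on.
    return max(LAYERS, key=lambda l: counts[l])
```

With these synthetic numbers the policy should settle on the middle split (layer 4), which trades a small confidence loss against much lower combined compute and communication cost than the deepest split.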
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about making Deep Neural Networks (DNNs) work better on small devices like phones or smart home appliances. Right now, these devices can't run very large DNNs because they need a lot of computing power and memory. The authors find a way to split the work between the device and the cloud while keeping the results accurate. They also created an algorithm that adapts to changing conditions such as different lighting or weather, which matters for image recognition tasks like self-driving cars or facial recognition. The results show that this approach can cut costs by at least 55% without sacrificing much accuracy.

Keywords

  • Artificial intelligence
  • Unsupervised