
Summary of DyCE: Dynamically Configurable Exiting for Deep Learning Compression and Real-time Scaling, by Qingyuan Wang et al.


DyCE: Dynamically Configurable Exiting for Deep Learning Compression and Real-time Scaling

by Qingyuan Wang, Barry Cardiff, Antoine Frappé, Benoit Larras, Deepu John

First submitted to arXiv on: 4 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
DyCE is a dynamically configurable system that allows deep learning (DL) models to adjust their performance-complexity trade-off at runtime without re-initialization or redeployment. This approach decouples the design of efficient dynamic models, enabling easy adaptation to new base models and potential general use in compression and scaling. DyCE achieves this by adding small exit networks to intermediate layers of the original model, allowing computation to terminate early if acceptable results are obtained. The system also proposes methods for generating optimized configurations and determining the types and positions of exit networks to achieve desired performance and complexity trade-offs. By enabling simple configuration switching, DyCE provides fine-grained performance tuning in real-time. This approach is demonstrated through image classification tasks using deep convolutional neural networks (CNNs), reducing computational complexity by 23.5% for ResNet152 and 25.9% for ConvNextv2-tiny on ImageNet with accuracy reductions of less than 0.5%.
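The early-exit mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and variable names are ours, not the authors' API): a backbone is split into segments, each followed by a small exit head that produces class probabilities, and a "configuration" is simply the list of per-exit confidence thresholds, so the performance-complexity trade-off can be switched at runtime without touching the model.

```python
def dyce_infer(x, segments, exit_heads, thresholds):
    """Run backbone segments in order; return (predicted class, exit index used).

    segments   -- list of callables forming the backbone, applied in sequence
    exit_heads -- one small classifier per segment, returning class probabilities
    thresholds -- per-exit confidence thresholds; this list is the runtime
                  "configuration" that sets the speed/accuracy trade-off
    """
    h = x
    last = len(segments) - 1
    for i, (segment, head) in enumerate(zip(segments, exit_heads)):
        h = segment(h)
        probs = head(h)
        confidence = max(probs)
        # The final exit always returns; earlier exits must clear their threshold.
        if i == last or confidence >= thresholds[i]:
            return probs.index(confidence), i

# Toy demo with two "segments" and hand-written exit heads (illustrative only).
segments = [lambda v: v + 1, lambda v: v * 2]
exit_heads = [
    lambda v: [0.6, 0.4] if v > 0 else [0.4, 0.6],  # weak early head
    lambda v: [0.9, 0.1] if v > 0 else [0.1, 0.9],  # stronger final head
]

# A strict configuration (high first threshold) pushes the sample to the final exit,
pred, used_exit = dyce_infer(1, segments, exit_heads, thresholds=[0.95, 0.0])
# while a permissive one lets it leave at the first exit, skipping later layers.
pred2, used_exit2 = dyce_infer(1, segments, exit_heads, thresholds=[0.5, 0.0])
```

Because only the threshold list changes between configurations, switching is a constant-time operation, which is what makes the fine-grained real-time tuning described above possible.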
Low Difficulty Summary (original content by GrooveSquid.com)
This paper introduces a new way to make deep learning models more efficient without sacrificing performance. Right now, most deep learning models are fixed and can’t adjust their complexity in real-time. The authors propose a system called DyCE that allows these models to dynamically change their complexity based on the difficulty of the task at hand. This is achieved by adding small networks to the original model that allow it to terminate early if good enough results are obtained. DyCE also provides a way to optimize its performance and adjust its complexity in real-time, making it useful for applications where computing resources are limited.

Keywords

* Artificial intelligence  * Deep learning  * Image classification