Summary of Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design, by Gen Li et al.


Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design

by Gen Li, Zhihao Shu, Jie Ji, Minghai Qin, Fatemeh Afghah, Wei Niu, Xiaolong Ma

First submitted to arxiv on: 3 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel approach to video resolution upscaling, leveraging deep neural networks (DNNs), has been gaining traction in computer vision applications. The technique splits videos into chunks and overfits a super-resolution (SR) model to each chunk, replacing traditional video transmission methods. However, this scheme requires many models and chunks, incurring significant model-switching overhead and memory footprint on the user's device. To address these issues, the researchers propose Dy-DCA, a Dynamic Deep neural network assisted by a Content-Aware data processing pipeline, which reduces the number of models to one while improving performance and conserving computational resources. They further design a framework that optimizes the dynamic features in Dy-DCA, enabling compiler optimizations such as fused code generation and static execution planning. The proposed method achieves better PSNR and real-time performance (33 FPS) on an off-the-shelf mobile phone, with a 1.7× speedup and up to 1.61× memory savings.
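The chunk-and-overfit baseline and the overhead it creates can be sketched in a few lines. This is an illustrative model of the cost structure only, not code from the paper: the function names, chunk length, and timing numbers are all hypothetical assumptions chosen to show why per-chunk models pay a model-switch penalty that a single dynamic model (as in Dy-DCA) avoids.

```python
# Hypothetical sketch of the "one overfitted SR model per chunk" scheme
# the paper improves on. All names and numbers are illustrative.

def split_into_chunks(num_frames, chunk_len):
    """Split a video into (start, end) frame ranges; each range would be
    paired with its own small, overfitted SR model in the baseline scheme."""
    return [(s, min(s + chunk_len, num_frames))
            for s in range(0, num_frames, chunk_len)]

def playback_cost_ms(chunks, switch_ms, infer_ms):
    """Total playback-side cost: one model load per chunk, plus
    per-frame SR inference (times are hypothetical)."""
    total = 0.0
    for start, end in chunks:
        total += switch_ms               # load/switch the chunk's model
        total += (end - start) * infer_ms  # run SR on each frame
    return total

chunks = split_into_chunks(num_frames=300, chunk_len=50)   # 6 chunks
baseline = playback_cost_ms(chunks, switch_ms=40.0, infer_ms=20.0)
# A single model (the Dy-DCA direction) pays the switch cost once:
single_model = playback_cost_ms([(0, 300)], switch_ms=40.0, infer_ms=20.0)
```

With these made-up numbers the baseline pays the 40 ms switch cost six times versus once, so the gap grows linearly with the number of chunks; the paper's contribution is reducing the model count to one while a content-aware pipeline and compiler support recover the per-chunk specialization.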
Low Difficulty Summary (written by GrooveSquid.com, original content)
A new way to make video look better is being developed using special kinds of computer programs called deep neural networks. This method takes videos and breaks them into smaller pieces, then uses another program to make each piece look even better. However, this process requires many different programs and lots of storage space on devices like phones. To solve these problems, scientists created a new way to use just one program while still making the video look great. They also developed ways to make the computer work more efficiently by rearranging the code and planning ahead. This method can make videos look better in real-time (33 frames per second) on regular mobile phones, using less storage space and working faster than before.

Keywords

* Artificial intelligence  * Neural network  * Super resolution