


SmartMem: Layout Transformation Elimination and Adaptation for Efficient DNN Execution on Mobile

by Wei Niu, Md Musfiqur Rahman Sanim, Zhihao Shu, Jiexiong Guan, Xipeng Shen, Miao Yin, Gagan Agrawal, Bin Ren

First submitted to arXiv on: 21 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents SmartMem, a framework for optimizing the performance of deep neural networks (DNNs) on mobile devices. The researchers focus on transformer architectures, particularly computationally efficient Swin-like designs, and on large models such as Stable Diffusion and Large Language Models (LLMs). They observe that layout transformations between computational operators cause significant slowdowns in these applications. To address this, the authors introduce a classification-based approach that eliminates most layout transformations by grouping operators into four categories and reasoning about the producer-consumer edges between them. They also develop methods for searching for optimal layouts and for efficiently using the 2.5-dimensional memory commonly found in mobile devices. Experimental results show that SmartMem outperforms five state-of-the-art DNN execution frameworks on mobile devices across a range of neural networks, including CNNs, Transformers with local and global attention, and LLMs.
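To make the classification idea concrete, here is a minimal sketch of how grouping operators by their layout behavior lets a compiler decide, per producer-consumer edge, whether an explicit layout-transformation operator is actually needed. The category names, the decision rules, and the `needs_transformation` helper are illustrative assumptions for this summary, not the paper's actual taxonomy or implementation.

```python
# Hypothetical sketch: classify operators by how they interact with data
# layouts, then check each producer->consumer edge. Categories and rules
# are assumptions made for illustration, not SmartMem's real definitions.
from enum import Enum, auto

class OpCategory(Enum):
    LAYOUT_AGNOSTIC = auto()      # e.g. elementwise ops: any input layout works
    LAYOUT_PRESERVING = auto()    # output layout mirrors the input layout
    LAYOUT_SPECIFIC = auto()      # requires one particular input layout
    LAYOUT_TRANSFORMING = auto()  # re-lays-out data itself (e.g. transpose)

def needs_transformation(producer_cat, producer_layout,
                         consumer_cat, consumer_required_layout):
    """Decide, for a single producer->consumer edge, whether an explicit
    layout-transformation operator must be inserted between the two ops."""
    if consumer_cat is OpCategory.LAYOUT_AGNOSTIC:
        return False  # consumer accepts the producer's layout as-is
    if consumer_cat is OpCategory.LAYOUT_TRANSFORMING:
        return False  # the consumer performs the re-layout itself
    # Otherwise the consumer needs a specific layout; a transformation is
    # required only when the producer's output layout does not match it.
    return producer_layout != consumer_required_layout

# Example edges: an elementwise op feeding a layout-specific op (e.g. a
# convolution expecting NCHW) vs. feeding a layout-agnostic op.
print(needs_transformation(OpCategory.LAYOUT_PRESERVING, "NHWC",
                           OpCategory.LAYOUT_SPECIFIC, "NCHW"))  # True
print(needs_transformation(OpCategory.LAYOUT_PRESERVING, "NHWC",
                           OpCategory.LAYOUT_AGNOSTIC, None))    # False
```

Under this kind of scheme, most edges fall into the "no transformation needed" cases, which is what allows a framework like SmartMem to eliminate the majority of layout-transformation operators rather than executing them.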
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making artificial intelligence (AI) work better on smartphones. The AI models involved, called transformers, are important for tasks like language translation and image recognition. However, these models slow down a lot on mobile devices because of the way data is laid out and rearranged in memory. To solve this problem, the researchers created a new framework called SmartMem that optimizes how these AI models run on smartphones. They showed that their approach can make AI run up to 7 times faster than current solutions on some tasks.

Keywords

» Artificial intelligence  » Attention  » Classification  » Diffusion  » Transformer  » Translation