Deep Learning Inference on Heterogeneous Mobile Processors: Potentials and Pitfalls
by Sicong Liu, Wentao Zhou, Zimu Zhou, Bin Guo, Minfan Wang, Cheng Fang, Zheng Lin, Zhiwen Yu
First submitted to arXiv on: 3 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates the feasibility of deploying deep learning (DL) models on resource-constrained mobile devices. By executing computation in parallel across the heterogeneous processors these devices contain, DL inference can be accelerated. Several methods have been proposed to optimize how computation is distributed and to minimize communication costs across heterogeneous processors, but their practical effectiveness in real-world scenarios remains less well understood. This study conducts a comprehensive empirical analysis of the capabilities and challenges of parallel DL inference on mobile processors, covering diverse DL models, software/hardware environments, workload patterns, and levels of resource availability. By identifying the limitations of existing techniques, it highlights opportunities for cross-level optimization. A hedged code sketch of the distribution-versus-communication trade-off follows this table. |
| Low | GrooveSquid.com (original content) | This paper looks at how to use the different kinds of processor chips in phones to make deep learning (DL) run faster. Phones contain several processors that can do calculations at the same time, which could speed up DL tasks like image recognition. The researchers try out different ways of dividing the work among these processors while keeping the messages sent back and forth to a minimum. They find that some methods are better than others for certain types of DL models or phone environments. This study helps us understand how to make DL work better on phones. |
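To make the computation-distribution versus communication-cost trade-off from the summaries above concrete, here is a minimal, self-contained sketch. It is not the paper's method: it assumes a linear chain of layers, invented per-layer latency profiles, a single flat transfer cost, and hypothetical names (`partition`, `layer_latency`, `TRANSFER_MS`); real systems must also handle DAG-shaped models, memory contention, and warm-up effects.

```python
# Hypothetical sketch (not the paper's algorithm): assign each layer of a
# linear DL model to one of several mobile processors so that per-layer
# compute latency plus inter-processor transfer cost is minimized.
# All latency numbers below are invented for illustration.

PROCESSORS = ["cpu", "gpu", "npu"]

# Made-up per-layer latencies (ms) per processor, as if from offline profiling.
layer_latency = [
    {"cpu": 4.0, "gpu": 1.5, "npu": 1.0},  # conv1: accelerator-friendly
    {"cpu": 6.0, "gpu": 2.0, "npu": 1.2},  # conv2: accelerator-friendly
    {"cpu": 1.0, "gpu": 2.5, "npu": 3.0},  # memory-bound op: CPU-friendly
    {"cpu": 3.0, "gpu": 1.0, "npu": 0.8},  # fc: accelerator-friendly
]

TRANSFER_MS = 0.7  # assumed cost of moving tensors between processors

def partition(layers, procs, xfer):
    """Dynamic program over the layer chain: best[p] holds the minimal
    latency to finish all layers so far with the current layer on p."""
    best = {p: layers[0][p] for p in procs}
    backptr = []  # backptr[i][p] = processor for layer i, given layer i+1 on p
    for lat in layers[1:]:
        new_best, step = {}, {}
        for p in procs:
            # Either stay on the same processor or pay the transfer cost.
            prev = min(procs, key=lambda q: best[q] + (0 if q == p else xfer))
            new_best[p] = best[prev] + (0 if prev == p else xfer) + lat[p]
            step[p] = prev
        best = new_best
        backptr.append(step)
    # Recover the layer-to-processor assignment by walking backwards.
    plan = [min(procs, key=best.get)]
    for step in reversed(backptr):
        plan.append(step[plan[-1]])
    plan.reverse()
    return best[plan[-1]], plan

total_ms, plan = partition(layer_latency, PROCESSORS, TRANSFER_MS)
print(f"estimated latency: {total_ms:.1f} ms, assignment: {plan}")
```

With these invented numbers the sketch prints an estimated latency of 5.4 ms with the plan `['npu', 'npu', 'cpu', 'npu']`, illustrating the core tension the paper studies: a memory-bound middle layer can be worth moving to the CPU even after paying the transfer cost in both directions.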
Keywords
» Artificial intelligence » Deep learning » Inference » Optimization