Summary of On Latency Predictors For Neural Architecture Search, by Yash Akhauri et al.
On Latency Predictors for Neural Architecture Search
by Yash Akhauri, Mohamed S. Abdelfattah
First submitted to arXiv on: 4 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Hardware Architecture (cs.AR); Computer Vision and Pattern Recognition (cs.CV); Performance (cs.PF)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper presents a comprehensive study of latency prediction for neural networks (NNs) in hardware-aware neural architecture search. The authors design a general predictor that transfers across hardware devices and NN architectures. They introduce a suite of latency prediction tasks obtained through automated partitioning of hardware device sets, which allows them to comprehensively study different aspects of latency prediction: predictor architecture, NN sample selection methods, hardware device representations, and NN operation encoding schemes. They also propose an end-to-end latency predictor training strategy that outperforms existing methods on 11 of 12 difficult latency prediction tasks, substantially improving prediction accuracy. |
| Low | GrooveSquid.com (original content) | The paper is about predicting how long artificial intelligence (AI) models take to run on different devices. Researchers want a good way to do this so AI models can be made to work better and faster on many machines. The authors developed a new method that predicts these run times well, which matters because it helps us use AI more efficiently. |
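The summary does not include code, but the kind of predictor it describes can be illustrated with a rough sketch: an NN architecture is turned into an operation encoding, concatenated with a hardware-device embedding, and fed through a small MLP that outputs a latency estimate. Everything below (sizes, the count-based encoding, the random weights standing in for trained parameters) is a hypothetical illustration, not the paper's actual predictor:

```python
import random

random.seed(0)

# Hypothetical sizes: 5 operation types, 8-dim device embedding, 16 hidden units.
NUM_OPS, HW_DIM, HIDDEN = 5, 8, 16

def encode_architecture(op_ids, num_ops=NUM_OPS):
    """Count-based encoding of a network's operations
    (one very simple 'operation encoding scheme')."""
    enc = [0.0] * num_ops
    for op in op_ids:
        enc[op] += 1.0
    return enc

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical learned hardware embeddings (one row per device) and MLP
# weights; random values stand in for parameters a real predictor would train.
hw_table = rand_matrix(3, HW_DIM)
W1 = rand_matrix(NUM_OPS + HW_DIM, HIDDEN)
W2 = rand_matrix(HIDDEN, 1)

def predict_latency(op_ids, device_id):
    """Forward pass: concat(op encoding, device embedding)
    -> ReLU hidden layer -> scalar latency estimate."""
    x = encode_architecture(op_ids) + hw_table[device_id]
    h = [max(0.0, sum(x[i] * W1[i][j] for i in range(len(x))))
         for j in range(HIDDEN)]
    return sum(h[j] * W2[j][0] for j in range(HIDDEN))

print(predict_latency([0, 2, 2, 4], device_id=1))
```

A multi-device predictor like this can be trained end to end on measured latencies from several devices at once, which is the spirit of the training strategy the paper studies.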