Summary of Accel-NASBench: Sustainable Benchmarking for Accelerator-Aware NAS, by Afzal Ahmad et al.
Accel-NASBench: Sustainable Benchmarking for Accelerator-Aware NAS
by Afzal Ahmad, Linfeng Du, Zhiyao Xie, Wei Zhang
First submitted to arXiv on: 9 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv |
| Medium | GrooveSquid.com (original content) | The paper introduces a technique that significantly reduces the computational cost of constructing Neural Architecture Search (NAS) benchmarks by searching for cheap training proxies. This makes it feasible to build realistic NAS benchmarks on large-scale datasets such as ImageNet2012, paired with on-device performance measurements for accelerators including GPUs, TPUs, and FPGAs. The authors validate the benchmark’s accuracy through extensive experiments with different NAS optimizers and hardware platforms; a hypothetical sketch of how such a benchmark might be queried follows this table. |
| Low | GrooveSquid.com (original content) | This paper helps make Neural Architecture Search (NAS) more practical by cutting down the amount of computation it needs. NAS experiments currently require a lot of computing power, which is expensive and time-consuming. To address this, the researchers created a way to find training proxies that lower the cost of building NAS benchmarks. They used this technique to create an open-source benchmark for finding the best hardware-aware models on large datasets like ImageNet2012. |
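
To make the idea of an accelerator-aware NAS benchmark concrete, the sketch below shows how a NAS optimizer might query a precomputed table that maps candidate architectures to both ImageNet accuracy and per-device latency. This is a minimal, hypothetical illustration: the class names, function names, device list, and numbers are invented for this example and do not reflect Accel-NASBench’s actual API or data.

```python
# Hypothetical sketch: querying a tabular, accelerator-aware NAS benchmark.
# All names and numbers below are illustrative assumptions, not the paper's API.

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass(frozen=True)
class BenchmarkEntry:
    top1_accuracy: float            # ImageNet2012 validation accuracy (%)
    latency_ms: Dict[str, float]    # device name -> measured inference latency (ms)


# A toy lookup table standing in for the benchmark's precomputed results.
TOY_BENCHMARK = {
    "arch_a": BenchmarkEntry(76.1, {"gpu": 4.2, "tpu": 3.1, "fpga": 8.5}),
    "arch_b": BenchmarkEntry(77.4, {"gpu": 6.8, "tpu": 4.9, "fpga": 12.0}),
    "arch_c": BenchmarkEntry(74.9, {"gpu": 3.0, "tpu": 2.4, "fpga": 6.1}),
}


def best_under_latency(device: str, budget_ms: float) -> Optional[str]:
    """Return the most accurate architecture whose latency on `device`
    fits within `budget_ms`, or None if no architecture qualifies."""
    feasible = {
        name: entry
        for name, entry in TOY_BENCHMARK.items()
        if entry.latency_ms[device] <= budget_ms
    }
    if not feasible:
        return None
    return max(feasible, key=lambda name: feasible[name].top1_accuracy)


if __name__ == "__main__":
    # Example: hardware-aware selection for a GPU with a 5 ms latency budget.
    print(best_under_latency("gpu", 5.0))  # -> "arch_a"
```

The point of a tabular benchmark like this is that a NAS optimizer can be evaluated by cheap table lookups instead of training and profiling every candidate architecture, which is what keeps the search (and the benchmarking of search algorithms) sustainable.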




