


Better Schedules for Low Precision Training of Deep Neural Networks

by Cameron R. Wolfe, Anastasios Kyrillidis

First submitted to arXiv on: 4 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed cyclic precision training (CPT) method dynamically adjusts precision throughout DNN training according to a cyclic schedule, achieving impressive improvements in training efficiency while maintaining or even improving model performance. The authors define a diverse suite of CPT schedules and analyze their performance across various DNN training regimes, including previously unexplored settings such as node classification with graph neural networks. The study discovers alternative CPT schedules that offer further improvements in training efficiency and model performance, and it derives best practices for choosing CPT schedules. Additionally, the research finds a direct correlation between model performance and training cost, implying that overly aggressive quantization, while cheaper, can permanently damage model performance.
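To make the idea of a cyclic precision schedule concrete, here is a minimal Python sketch that maps a training step to a bit-width cycling between a low and a high precision. The function name, the cosine shape of the ramp, and the parameter values (8 cycles, 3 to 8 bits) are illustrative assumptions only; the paper studies a broader suite of schedules and may define them differently.

```python
import math

def cyclic_precision(step, total_steps, num_cycles=8, min_bits=3, max_bits=8):
    """Illustrative cyclic precision schedule (cosine-style ramp).

    Returns an integer bit-width that repeatedly ramps from min_bits up
    to max_bits over each cycle of training. This is only a sketch of
    the general idea, not the exact schedules defined in the paper.
    """
    cycle_len = total_steps / num_cycles
    # Position within the current cycle, normalized to [0, 1).
    t = (step % cycle_len) / cycle_len
    # Cosine ramp: min_bits at the start of a cycle, max_bits near its end.
    bits = min_bits + 0.5 * (max_bits - min_bits) * (1 - math.cos(math.pi * t))
    return int(round(bits))

# Example: bit-widths at a few points of a 10,000-step run (cycle length 1,250).
for s in [0, 625, 1250, 1875, 2500]:
    print(s, cyclic_precision(s, total_steps=10_000))
```

In a quantized training loop, the returned bit-width would be fed to whatever fake-quantization routine the training framework uses for weights and activations at each step; that integration detail is framework-specific and not shown here.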
Low Difficulty Summary (original content by GrooveSquid.com)
Cyclic precision training (CPT) is a way to make deep neural networks (DNNs) train faster without losing accuracy. The paper looks at different CPT schedules and how well they work for various types of DNNs. It finds new schedules that work even better than the existing ones and shows how to choose the best schedule for a specific task. The study also uncovers a surprising connection between how well a model performs and how much it costs to train.

Keywords

  • Artificial intelligence
  • Classification
  • Precision
  • Quantization