Summary of Scaling Laws for Precision, by Tanishq Kumar et al.


Scaling Laws for Precision

by Tanishq Kumar, Zachary Ankner, Benjamin F. Spector, Blake Bordelon, Niklas Muennighoff, Mansheej Paul, Cengiz Pehlevan, Christopher Ré, Aditi Raghunathan

First submitted to arXiv on: 7 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, the authors propose “precision-aware” scaling laws that cover both training and inference of language models at varying precisions. Existing scaling laws do not account for the impact of low-precision training and inference on model quality and cost. The authors find that training in lower precision reduces the model’s effective parameter count, which lets them predict the loss incurred from low-precision training and from post-train quantization. For inference, they show that the degradation introduced by post-training quantization grows as models are trained on more data, eventually making additional pretraining data actively harmful for a model that will be quantized. Finally, they unify the scaling laws for pre- and post-training quantization into a single functional form that predicts degradation from training and inference at varied precisions.
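To make the shape of such a law concrete, here is a minimal sketch of a precision-aware, Chinchilla-style loss predictor. It replaces the raw parameter count with an “effective” count that shrinks at low training precision and adds a post-training-quantization penalty that grows with the tokens-per-parameter ratio. All constants, functional forms, and names (effective_params, ptq_penalty, gamma, etc.) are illustrative assumptions for this sketch, not the fitted law from the paper.

```python
import math

# Illustrative Chinchilla-style constants (assumed values, not fitted from the paper)
A, B, E = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28


def effective_params(n_params: float, train_bits: float, gamma: float = 2.0) -> float:
    """Assumed form: lower training precision shrinks the effective parameter count,
    saturating toward the full count as precision grows."""
    return n_params * (1.0 - math.exp(-train_bits / gamma))


def ptq_penalty(n_params: float, n_tokens: float, post_bits: float,
                c: float = 0.1, gamma_post: float = 1.0) -> float:
    """Assumed form: post-training-quantization degradation grows with the
    tokens-per-parameter ratio and decays exponentially in inference precision."""
    return c * (n_tokens / n_params) * math.exp(-post_bits / gamma_post)


def predicted_loss(n_params: float, n_tokens: float,
                   train_bits: float, post_bits: float) -> float:
    """Standard N/D power-law terms with N replaced by N_eff, plus an additive
    penalty for quantizing the weights after training."""
    n_eff = effective_params(n_params, train_bits)
    return A / n_eff**ALPHA + B / n_tokens**BETA + E \
        + ptq_penalty(n_params, n_tokens, post_bits)


# Example: with aggressive 4-bit post-training quantization, the predicted loss
# eventually rises as the token count grows, because the PTQ penalty dominates.
for tokens in (2e10, 2e11, 2e12):
    loss = predicted_loss(1e9, tokens, train_bits=16, post_bits=4)
    print(f"{tokens:.0e} tokens -> predicted loss {loss:.3f}")
```

Under these assumed constants, the loop illustrates the qualitative behaviour described above: beyond some data budget, additional pretraining tokens make the quantized model’s predicted loss worse rather than better.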
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how language models behave when they are trained and run with lower-precision numbers, which makes the hardware cheaper and faster but less exact. The authors want to keep training efficient without giving up quality. They find that lowering the precision acts a bit like shrinking the model, so for a fixed compute budget it can pay off to train a larger model at lower precision. The paper also shows that the amount of pretraining data matters: past a certain point, adding more data can actually make the model worse once it is quantized for cheap inference.

Keywords

» Artificial intelligence  » Inference  » Precision  » Pretraining  » Quantization  » Scaling laws