
Summary of Arctic-SnowCoder: Demystifying High-Quality Data in Code Pretraining, by Yuxiang Wei et al.


Arctic-SnowCoder: Demystifying High-Quality Data in Code Pretraining

by Yuxiang Wei, Hojae Han, Rajhans Samdani

First submitted to arXiv on: 3 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces Arctic-SnowCoder-1.3B, a code language model pretrained on 555 billion tokens through three phases of progressively refined data. The model achieves state-of-the-art performance on BigCodeBench, a coding benchmark focused on practical and challenging programming tasks, outperforming similarly sized models trained on up to 1 trillion tokens. Arctic-SnowCoder-1.3B also matches the performance of leading small base code models trained on trillions of tokens, surpassing StarCoder2-3B on HumanEval+, a benchmark that evaluates function-level code generation. The paper presents a comprehensive analysis justifying the model's design choices and highlights the importance of aligning high-quality pretraining data with the distribution of downstream applications. A minimal sketch of the staged training schedule appears after the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper talks about how to make language models better. Good training data is known to help, but it is not clear what makes data "good." The researchers created a model called Arctic-SnowCoder-1.3B and trained it on 555 billion tokens of code in three steps, with each step using more carefully chosen data than the last. When tested, it did well on coding benchmarks, even beating models trained on far more data. This suggests that carefully selecting training data can matter as much as collecting more of it.

Keywords

  • Artificial intelligence
  • Language model