
Summary of Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum, by Hadi Pouransari et al.


Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum

by Hadi Pouransari, Chun-Liang Li, Jen-Hao Rick Chang, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, Oncel Tuzel

First submitted to arXiv on: 21 May 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty (written by the paper authors)
Read the original abstract here.

Medium difficulty (written by GrooveSquid.com, original content)
This paper proposes a technique called dataset decomposition to train large language models (LLMs) more efficiently. The standard approach concatenates documents and chunks the result into fixed-length sequences, so attention cost grows quadratically with the chosen sequence length regardless of how long the underlying documents are, and practical sequence lengths stay limited. Dataset decomposition instead breaks the dataset into buckets, each containing sequences of the same length drawn from different documents, and trains with variable sequence lengths using a curriculum that samples from all buckets simultaneously. Because the computational cost is proportional to actual document lengths rather than a fixed chunk length, this yields significant time savings over the baseline concat-and-chunk approach. The authors train an 8k context-length model at the same cost as a 2k context-length model trained with the baseline approach and achieve better performance on long-context benchmarks. They also highlight the importance of the sequence length distribution and of the curriculum when training LLMs. (A minimal sketch of the bucketing and sampling idea appears after these summaries.)
Low difficulty (written by GrooveSquid.com, original content)
Large language models are complex machines that can understand and generate human-like text. However, they’re not very efficient when it comes to processing large amounts of data. This paper introduces a new way to train these models that’s much faster and more effective. Instead of breaking down big documents into small chunks, the authors divide the data into buckets based on sequence length. This allows them to train the model in smaller batches while still learning from a wide range of text lengths. As a result, they’re able to train their model up to 6 times faster than before and achieve better results.
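To make the bucketing and curriculum idea described in the medium summary concrete, the following is a minimal Python sketch, not the authors' implementation: the power-of-two chunk lengths, the fixed token budget per step, the linear short-to-long sampling weights, and the function names (decompose, build_buckets, sample_batch) are all illustrative assumptions.

# Illustrative sketch (not the authors' code): split tokenized documents into
# length buckets and sample variable-length batches with a simple curriculum.
import random
from collections import defaultdict

def decompose(doc_tokens, min_len=256, max_len=8192):
    # Split one tokenized document into chunks whose lengths are powers of two,
    # following the binary decomposition of the document length. No chunk ever
    # mixes tokens from different documents.
    chunks, start, remaining, length = [], 0, len(doc_tokens), max_len
    while length >= min_len and remaining >= min_len:
        if remaining >= length:
            chunks.append(doc_tokens[start:start + length])
            start, remaining = start + length, remaining - length
        else:
            length //= 2
    return chunks  # any leftover shorter than min_len is dropped in this sketch

def build_buckets(dataset):
    # Group chunks from all documents into buckets keyed by sequence length.
    buckets = defaultdict(list)
    for doc in dataset:
        for chunk in decompose(doc):
            buckets[len(chunk)].append(chunk)
    return buckets

def sample_batch(buckets, step, total_steps, tokens_per_step=65536):
    # Pick one bucket per optimization step; batch size scales inversely with
    # sequence length so every step sees roughly the same number of tokens.
    # The "curriculum" is a linear shift of sampling weight from short to long
    # sequences over training (an assumed schedule, for illustration only).
    lengths = sorted(buckets)
    progress = step / max(total_steps - 1, 1)
    weights = [(1 - progress) / l + progress * l for l in lengths]
    length = random.choices(lengths, weights=weights, k=1)[0]
    batch_size = max(tokens_per_step // length, 1)
    return random.choices(buckets[length], k=batch_size)  # with replacement

# Toy usage: three "documents" of dummy token ids.
docs = [[i % 50000 for i in range(n)] for n in (3000, 9000, 700)]
buckets = build_buckets(docs)
batch = sample_batch(buckets, step=0, total_steps=1000)
print({l: len(c) for l, c in buckets.items()}, len(batch), len(batch[0]))

The property the sketch tries to preserve is that every training sequence comes from a single document and each step processes roughly the same number of tokens, so steps that draw longer sequences use proportionally smaller batches.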

Keywords

  » Artificial intelligence
  » Attention
  » Context length