Summary of MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models, by Zichun Yu et al.


MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models

by Zichun Yu, Spandan Das, Chenyan Xiong

First submitted to arXiv on: 10 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed model-aware data selection method, MATES, improves language model pretraining efficiency by selecting high-quality data from massive web corpora. Current static selection methods cannot adapt to the pretraining model's evolving data preferences. MATES instead trains a data influence model that continuously adapts to the pretraining model and selects the most effective data for each pretraining stage. It collects oracle data influence by locally probing the pretraining model and approximates that oracle with a small data influence model. This approach significantly outperforms random selection on extensive downstream tasks and halves the total FLOPs required to reach a given level of performance (a code sketch of this loop appears after the summaries below).

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new method called MATES that helps language models learn better from large datasets. Today, people usually choose pretraining data with simple fixed rules or by relying on references from larger models. MATES instead uses a small helper model that adapts to the changing needs of the main language model during training, allowing it to select the most useful data for each stage of training. The results show that this approach works better than choosing data at random and needs less compute to reach similar results.

Keywords

» Artificial intelligence  » Language model  » Pretraining