DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

by Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura

First submitted to arXiv on: 30 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Distilling Dataset into Language Model (DiLM) approach compresses a training dataset by training a language model to generate informative synthetic samples as text data. Because the distilled samples are plain text, they can be used to train models with different architectures and for in-context learning of large language models. DiLM outperforms current coreset selection methods on various text classification datasets and shows remarkable generalization performance, addressing the limitations of existing approaches to text dataset distillation. (A minimal code sketch of this idea appears after the summaries.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
DiLM is a new way to make training datasets smaller while still keeping them useful. Instead of optimizing each synthetic example separately, it teaches a language model to generate helpful synthetic examples. Because those examples are ordinary text, they can be used to train different kinds of models and even help large language models learn from a few examples given in context. The results show that DiLM works better than other methods for building these smaller datasets, which is important for many applications.

Keywords

» Artificial intelligence  » Distillation  » Generalization  » Language model  » Text classification