SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching

by Yongmin Lee, Hye Won Chung

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes SelMatch, a dataset distillation method that synthesizes a small number of images per class (IPC) from a large dataset with minimal performance loss. SelMatch combines selection-based initialization with partial updates through trajectory matching, which lets it control the synthetic dataset's difficulty level and tailor it to the IPC scale. On CIFAR-10/100 and TinyImageNet, SelMatch consistently outperforms leading selection-only and distillation-only methods across subset ratios from 5% to 30%.
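To make the two mechanisms concrete, here is a minimal sketch of the idea as described in the summary: initialize the synthetic set by selecting real images per class, then apply gradient updates only to a fraction of that set while keeping the rest frozen. This is not the paper's implementation; the difficulty-scoring function, the toy data, and the stand-in gradient (which in SelMatch would come from a trajectory-matching loss) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_init(images, labels, scores, ipc, num_classes):
    """Selection-based initialization: for each class, keep the `ipc`
    images with the highest difficulty score. How the scores are
    computed (e.g. from training dynamics) is outside this sketch."""
    keep = []
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        ranked = idx[np.argsort(scores[idx])[::-1]]  # hardest first
        keep.extend(ranked[:ipc])
    return images[np.array(keep)].copy()

def partial_update(syn, grad, update_frac, lr):
    """Partial update: only the first `update_frac` fraction of the
    synthetic set receives the gradient step; the remaining selected
    images stay frozen as real samples."""
    n = int(update_frac * len(syn))
    syn[:n] -= lr * grad[:n]
    return syn

# Toy demo with random vectors standing in for images.
images = rng.normal(size=(100, 8))   # 100 "images", 8 features each
labels = np.arange(100) % 5          # 5 classes, 20 samples per class
scores = rng.random(100)             # hypothetical difficulty scores

syn = select_init(images, labels, scores, ipc=4, num_classes=5)  # 20 images
frozen_before = syn[10:].copy()      # with update_frac=0.5, the last half is frozen
grad = rng.normal(size=syn.shape)    # stand-in for a trajectory-matching gradient
syn = partial_update(syn, grad, update_frac=0.5, lr=0.1)
assert np.allclose(syn[10:], frozen_before)  # frozen portion is untouched
```

The split between updated and frozen images is what lets the method interpolate between pure selection (fraction 0) and pure distillation (fraction 1).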
Low Difficulty Summary (written by GrooveSquid.com, original content)
This study aims to create a much smaller version of a big dataset that still trains models well. Existing approaches either pick real images or generate new ones, and each works well only at certain dataset sizes. The new method, SelMatch, combines both ideas, so performance stays high across a range of sizes. This matters because training on a huge dataset is expensive, while a poorly chosen small dataset hurts accuracy. The researchers tested their method on several well-known image datasets and found that it works better than other methods.

Keywords

» Artificial intelligence  » Distillation