Summary of Prioritize Alignment in Dataset Distillation, by Zekai Li et al.
Prioritize Alignment in Dataset Distillation
by Zekai Li, Ziyao Guo, Wangbo Zhao, Tianle Zhang, Zhi-Qi Cheng, Samir Khaki, Kaipeng Zhang, Ahmad Sajedi, Konstantinos N. Plataniotis, Kai Wang, Yang You
First submitted to arXiv on: 6 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The abstract introduces Dataset Distillation, a method that compresses large datasets into much smaller synthetic ones without significantly hurting the performance of models trained on them. Current methods use an agent model to extract information from the target dataset and embed it into the distilled dataset, but they introduce misaligned information in both the extraction and embedding stages. To address this, the authors propose Prioritize Alignment in Dataset Distillation (PAD), which filters out misaligned information by pruning the target dataset according to the compression ratio and by using only the deep layers of the agent model for distillation (a code sketch of these two steps follows the table). PAD brings non-trivial improvements to mainstream matching-based distillation algorithms and achieves state-of-the-art performance on various benchmarks. |
| Low | GrooveSquid.com (original content) | Dataset Distillation is a way to shrink big datasets into smaller, synthetic ones without hurting how well trained models work. Right now, methods do this by using an agent model to grab info from the big dataset and put it into the small one. But these methods mix in mismatched info during both of those steps. To fix this, researchers created Prioritize Alignment in Dataset Distillation (PAD), which gets rid of the bad info by removing less important parts of the big dataset and by only using the deep parts of the agent model to shrink the data. This new approach makes a real difference for common methods and does better than others on many tests. |
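To make PAD's two filtering steps concrete, here is a minimal, hypothetical PyTorch sketch. The per-sample difficulty scores, the keep-fraction rule, and the 0.5 depth cutoff are illustrative assumptions based on the summary above, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of PAD's two filtering ideas. The difficulty scores,
# the keep-fraction rule, and the depth cutoff are illustrative assumptions,
# not the paper's actual implementation.

def prune_by_compression_ratio(data, labels, difficulty, compress_ratio):
    """Keep the fraction of easiest samples implied by the compression ratio.

    Intuition from the summary: the smaller the distilled set, the more the
    distillation should focus on well-aligned (here: easy) samples.
    """
    keep = max(1, int(len(data) * compress_ratio))  # hypothetical rule
    order = torch.argsort(difficulty)               # easiest samples first
    idx = order[:keep]
    return data[idx], labels[idx]

def deep_layer_params(model, depth_cutoff=0.5):
    """Return only the deeper parameters of the agent model, reflecting the
    idea of using deep-layer information for distillation."""
    params = list(model.parameters())
    start = int(len(params) * depth_cutoff)
    return params[start:]

# Usage sketch: prune the target data, then compute gradients only on the
# agent model's deep layers (as one would inside a matching-based objective).
agent = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(100, 32), torch.randint(0, 10, (100,))
difficulty = torch.rand(100)  # stand-in per-sample difficulty score

x_kept, y_kept = prune_by_compression_ratio(x, y, difficulty, compress_ratio=0.1)
loss = F.cross_entropy(agent(x_kept), y_kept)
deep_grads = torch.autograd.grad(loss, deep_layer_params(agent))
print([g.shape for g in deep_grads])
```

In a gradient-matching distiller, for example, these deep-layer gradients would then be compared against the corresponding gradients computed on the synthetic set; restricting the comparison to deep layers is how the sketch reflects PAD's second filtering step.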
Keywords
» Artificial intelligence » Alignment » Distillation » Pruning