Self-supervised Dataset Distillation: A Good Compression Is All You Need
by Muxin Zhou, Zeyuan Yin, Shitong Shao, Zhiqiang Shen
First submitted to arXiv on: 11 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Dataset distillation aims to compress the information in a large-scale original dataset into a new, compact dataset while preserving its informational essence. Previous studies have focused on aligning intermediate statistics between the original and distilled data, such as weight trajectories, features, gradients, and BatchNorm statistics. This paper introduces SC-DD, a Self-supervised Compression framework for Dataset Distillation, which exploits the larger variances in the BatchNorm statistics of self-supervised models: the recovered data are updated by gradients against these statistics, yielding more informative synthetic data (a rough code sketch of this idea follows the table). Under the same recovery and post-training budgets, the proposed approach outperforms state-of-the-art supervised dataset distillation methods such as SRe^2L, MTT, TESLA, DC, and CAFE by large margins when larger models are employed. Extensive experiments on CIFAR-100, Tiny-ImageNet, and ImageNet-1K demonstrate the superiority of SC-DD. |
Low | GrooveSquid.com (original content) | This paper is about making big datasets smaller while keeping the important information. It's like compressing a bunch of photos so they take up less space on your phone. The authors created a new method called SC-DD, which uses models that can learn from themselves (self-supervised models) to make the compressed dataset better than previous methods. They tested it on several datasets and showed that it works really well. |
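The Medium summary above describes SC-DD's core mechanism: optimizing synthetic images so their batch statistics align with the BatchNorm statistics stored in a pretrained self-supervised model. As a rough illustration only, here is a minimal PyTorch sketch of that BatchNorm-alignment idea; the ResNet-18 backbone, hook mechanics, optimizer, and all hyperparameters are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch of BatchNorm-statistics matching for dataset
# distillation, in the spirit of SC-DD / SRe^2L-style recovery.
# Assumptions (not from the paper): torchvision ResNet-18 backbone,
# hook-based BN loss, plain Adam optimizer, illustrative hyperparameters.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class BNStatsLoss:
    """Forward hook that penalizes the gap between the current batch's
    feature statistics and the BN layer's stored running statistics."""
    def __init__(self, bn: nn.BatchNorm2d):
        self.loss = torch.tensor(0.0)
        bn.register_forward_hook(self)

    def __call__(self, module, inputs, output):
        x = inputs[0]                        # (N, C, H, W) pre-BN features
        mean = x.mean(dim=(0, 2, 3))         # per-channel batch mean
        var = x.var(dim=(0, 2, 3), unbiased=False)
        self.loss = (torch.norm(mean - module.running_mean, 2)
                     + torch.norm(var - module.running_var, 2))

model = resnet18(weights=None)  # load self-supervised weights in practice
model.eval()
for p in model.parameters():
    p.requires_grad_(False)     # only the synthetic images are optimized

hooks = [BNStatsLoss(m) for m in model.modules()
         if isinstance(m, nn.BatchNorm2d)]

# Synthetic images are the only trainable parameters.
synthetic = torch.randn(64, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([synthetic], lr=0.1)

for step in range(1000):                    # recovery budget (illustrative)
    optimizer.zero_grad()
    model(synthetic)                        # forward pass populates hook losses
    bn_loss = sum(h.loss for h in hooks)    # aggregate BN alignment loss
    bn_loss.backward()                      # gradients flow to the images
    optimizer.step()
```

In an actual pipeline one would load self-supervised pretrained weights, add image regularizers and augmentations, and post-train a model on the recovered images; the loop above only illustrates the gradient-based recovery step the summary refers to.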
Keywords
» Artificial intelligence » Distillation » Self-supervised » Supervised