Summary of DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation, by Aru Maekawa et al.
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation, by Aru Maekawa, Satoshi Kosugi, Kotaro…
Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation, by HongWei Yan,…
The Need for Speed: Pruning Transformers with One Recipe, by Samir Khaki, Konstantinos N. Plataniotis. First submitted…
Deep Support Vectors, by Junhoo Lee, Hyunho Lee, Kyomin Hwang, Nojun Kwak. First submitted to arxiv on:…
Exploring the potential of prototype-based soft-labels data distillation for imbalanced data classification, by Radu-Andrei Rosu, Mihaela-Elena…
Exploring the Impact of Dataset Bias on Dataset Distillation, by Yao Lu, Jianyang Gu, Xuguang Chen,…
Attention is all you need for boosting graph convolutional neural network, by Yinwei Wu. First submitted to…
Editing Massive Concepts in Text-to-Image Diffusion Models, by Tianwei Xiong, Yue Wu, Enze Xie, Yue Wu,…
REAL: Representation Enhanced Analytic Learning for Exemplar-free Class-incremental Learning, by Run He, Huiping Zhuang, Di Fang,…
Ground-A-Score: Scaling Up the Score Distillation for Multi-Attribute Editing, by Hangeol Chang, Jinho Chang, Jong Chul…