Summary of Understanding the Gains from Repeated Self-Distillation, by Divyansh Pareek et al.
Understanding the Gains from Repeated Self-Distillation, by Divyansh Pareek, Simon S. Du, Sewoong Oh. First submitted to…