Summary of Understanding the Gains from Repeated Self-Distillation, by Divyansh Pareek et al.
Understanding the Gains from Repeated Self-Distillation
by Divyansh Pareek, Simon S. Du, Sewoong Oh
First submitted to…