Summary of Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model, by Jinyin Chen et al.
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model, by Jinyin Chen, Xiaoming Zhao,…
Tiny models from tiny data: Textual and null-text inversion for few-shot distillation, by Erik Landolsi, Fredrik…
Decision Boundary-aware Knowledge Consolidation Generates Better Instance-Incremental Learner, by Qiang Nie, Weifu Fu, Yuhuan Lin, Jialin…
FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning, by Seongyoon Kim, Minchan Jeong, Sungnyun…
Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS, by Hao Fu, Tunhou Zhang, Hai…
Guided Score identity Distillation for Data-Free One-Step Text-to-Image Generation, by Mingyuan Zhou, Zhendong Wang, Huangjie Zheng,…
LLM and GNN are Complementary: Distilling LLM for Multimodal Graph Learning, by Junjie Xu, Zongyu Wu,…
Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning, by Yang…
Improving the Training of Rectified Flows, by Sangyun Lee, Zinan Lin, Giulia Fanti. First submitted to arXiv…
Diffusion Policies creating a Trust Region for Offline Reinforcement Learning, by Tianyu Chen, Zhendong Wang, Mingyuan…