Summary of Small Scale Data-Free Knowledge Distillation, by He Liu et al.
Small Scale Data-Free Knowledge Distillation, by He Liu, Yikai Wang, Huaping Liu, Fuchun Sun, Anbang Yao. First…
TernaryLLM: Ternarized Large Language Model, by Tianqi Chen, Zhe Li, Weixiang Xu, Zeyu Zhu, Dong Li,…
DKDL-Net: A Lightweight Bearing Fault Detection Model via Decoupled Knowledge Distillation and Low-Rank Adaptation Fine-tuning, by…
Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders, by Tingxu Han, Weisong Sun, Ziqi Ding, Chunrong…
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model, by Jinyin Chen, Xiaoming Zhao,…
Tiny Models from Tiny Data: Textual and Null-Text Inversion for Few-Shot Distillation, by Erik Landolsi, Fredrik…
Adversarial Moment-Matching Distillation of Large Language Models, by Chen Jia. First submitted to arxiv on: 5 Jun…
PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection, by Ronghui Xu, Hao Miao, Senzhang…
Vision-Language Meets the Skeleton: Progressive Distillation with Cross-Modal Knowledge for 3D Action Representation Learning, by Yang…
Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling, by Kidist Amde Mekonnen, Nicola Dall'Asen, Paolo Rota. First…