Summary of Divide-or-Conquer? Which Part Should You Distill Your LLM?, by Zhuofeng Wu et al.
Divide-or-Conquer? Which Part Should You Distill Your LLM? by Zhuofeng Wu, He Bai, Aonan Zhang, Jiatao…
Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting by Rong Dai, Yonggang Zhang, Ang Li,…
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation by Phuc Phan, Hieu Tran,…
Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model by Zichang Liu, Qingyun Liu,…
SDXL-Lightning: Progressive Adversarial Diffusion Distillation by Shanchuan Lin, Anran Wang, Xiao Yang. First submitted to arXiv on:…
Improve Cross-Architecture Generalization on Dataset Distillation by Binglin Zhou, Linhao Zhong, Wentao Chen. First submitted to arXiv…
Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation by Hyunjune Shin, Dong-Wan Choi. First submitted to…
DB-LLM: Accurate Dual-Binarization for Efficient LLMs by Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin…
GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation by Ayan Banerjee, Sanket…
Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models by Shengzhi Li, Rongyu…