Summary of FADA: Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG Distillation, by Tianyun Zhong et al.
FADA: Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG Distillation, by Tianyun Zhong, Chao Liang, Jianwen Jiang, …
Phi-4 Technical Report, by Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, …
Align-KD: Distilling Cross-Modal Alignment Knowledge for Mobile Vision-Language Model, by Qianhan Feng, Wenshuo Li, Tong Lin, …
Local vs. Global: Local Land-Use and Land-Cover Models Deliver Higher Quality Maps, by Girmaw Abebe Tadesse, Caleb …
When Babies Teach Babies: Can student knowledge sharing outperform Teacher-Guided Distillation on small datasets?, by Srikrishna …
Adversarial Prompt Distillation for Vision-Language Models, by Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun …
Self-supervised cross-modality learning for uncertainty-aware object detection and recognition in applications which lack pre-labelled training…
Pre-training Distillation for Large Language Models: A Design Space Exploration, by Hao Peng, Xin Lv, Yushi …
Self-Supervised Keypoint Detection with Distilled Depth Keypoint Representation, by Aman Anand, Elyas Rashno, Amir Eskandari, Farhana …
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation, by Shuai Zhao, Xiaobao Wu, Cong-Duy Nguyen, …