Summary of SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots, by Weixing Wang et al.
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots, by Weixing Wang, Haojin Yang,…