Summary of GNNs-to-MLPs by Teacher Injection and Dirichlet Energy Distillation, by Ziang Zhou et al.
GNNs-to-MLPs by Teacher Injection and Dirichlet Energy Distillation, by Ziang Zhou, Zhihao Ding, Jieming Shi, Qing…