Summary of Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation, by Ge Chen et al.
Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation by Ge Chen, Yulan Hu, Sheng Ouyang, …
OAL: Enhancing OOD Detection Using Latent Diffusion by Heng Gao, Zhuolin He, Shoumeng Qiu, Jian Pu. First submitted …
Continual Learning with Diffusion-based Generative Replay for Industrial Streaming Data by Jiayi He, Jiao Chen, Qianmiao …
Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning? by Nirjhor Rouf, Fin Amin, …
Federated Learning with a Single Shared Image by Sunny Soni, Aaqib Saeed, Yuki M. Asano. First submitted …
Graph Knowledge Distillation to Mixture of Experts by Pavel Rumiantsev, Mark Coates. First submitted to arXiv on: …
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions by Laiqiao …
Adaptive Teaching with Shared Classifier for Knowledge Distillation by Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong …
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications by Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew …
Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks by Lin Zuo, Yongqi Ding, Mengmeng …