Summary of AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation, by Weigang Lu et al.
AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation, by Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang. First submitted to…
Adversarial Training via Adaptive Knowledge Amalgamation of an Ensemble of Teachers, by Shayan Mohajer Hamidi, Linfeng…
GeoMask3D: Geometrically Informed Mask Selection for Self-Supervised Point Cloud Learning in 3D, by Ali Bahri, Moslem…
Distilling Diffusion Models into Conditional GANs, by Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha…
Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism, by Chenqi Guo, Shiwei Zhong,…
Noisy Node Classification by Bi-level Optimization based Multi-teacher Distillation, by Yujing Liu, Zongqian Wu, Zhengyu Lu,…
ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation, by Divyang Doshi, Jung-Eun Kim. First submitted to arxiv on: 15 Apr…
Improve Knowledge Distillation via Label Revision and Data Selection, by Weichao Lan, Yiu-ming Cheung, Qing Xu,…
Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces, by Yejia Liu, Shijin Duan,…
FlyKD: Graph Knowledge Distillation on the Fly with Curriculum Learning, by Eugene Ku. First submitted to arxiv…