
Summary of Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation, by Lirong Wu et al.


Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation

by Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Stan Z. Li

First submitted to arXiv on: 20 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper studies GNN-to-MLP Knowledge Distillation (KD), which transfers knowledge from Graph Neural Networks (GNNs) into lightweight Multi-Layer Perceptrons (MLPs). The authors identify the hardness of sample nodes in teacher GNNs as a key bottleneck of existing graph KD algorithms, and propose a Hardness-aware GNN-to-MLP Distillation (HGMD) framework that decouples two types of hardness: student-free knowledge hardness and student-dependent distillation hardness. Building on this, two hardness-aware distillation schemes, HGMD-weight and HGMD-mixup, transfer hardness-aware knowledge from teacher GNNs to student MLPs (a rough code sketch of the weighting idea appears after the summaries below). Across seven real-world datasets, the approach outperforms state-of-the-art competitors, with an average improvement of 2.48% over the teacher GNNs.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps computers learn better by taking what a powerful model knows and teaching a simpler, faster model to do the same things. The authors found that some examples are much harder for the simpler model to learn than others, so they developed a new way of teaching that takes this hardness into account. They tested their approach on several real-world problems and showed that it works well, improving accuracy by an average of 2.48% even compared to the more powerful teacher models.
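
To make the weighting idea concrete, below is a minimal, illustrative PyTorch-style sketch of hardness-weighted GNN-to-MLP distillation. It assumes per-node knowledge hardness is approximated by the entropy of the teacher GNN's prediction and that this hardness scales each node's contribution to the distillation loss; the hardness measure, the weighting direction (here, hard nodes are down-weighted), and the function names are illustrative assumptions, not the authors' exact HGMD-weight formulation, and the student-dependent distillation hardness from the paper is omitted.

```python
# Minimal sketch of hardness-weighted GNN-to-MLP distillation (illustrative
# assumptions only, not the paper's exact HGMD-weight scheme): per-node
# knowledge hardness = normalized entropy of the teacher's prediction, and
# harder nodes get smaller weights in the KD loss.
import math

import torch
import torch.nn.functional as F


def knowledge_hardness(teacher_logits: torch.Tensor) -> torch.Tensor:
    """Per-node hardness proxy in [0, 1]: normalized entropy of the teacher softmax."""
    probs = F.softmax(teacher_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy / math.log(teacher_logits.size(-1))


def hardness_weighted_kd_loss(student_logits: torch.Tensor,
                              teacher_logits: torch.Tensor,
                              temperature: float = 2.0) -> torch.Tensor:
    """KL distillation loss in which nodes with low knowledge hardness
    (confident teacher predictions) contribute more to the student MLP's training."""
    t_soft = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_soft = F.log_softmax(student_logits / temperature, dim=-1)
    per_node_kl = F.kl_div(s_log_soft, t_soft, reduction="none").sum(dim=-1)
    # Down-weighting hard nodes is one plausible choice for illustration;
    # the actual hardness-aware weighting in HGMD may differ.
    weights = 1.0 - knowledge_hardness(teacher_logits)
    return (weights.detach() * per_node_kl).mean() * temperature ** 2
```

In practice, `teacher_logits` would be precomputed for all nodes by a trained GNN and `student_logits` produced by the MLP from node features alone; this loss would typically be combined with a standard cross-entropy term on labeled nodes when training the student.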

Keywords

* Artificial intelligence
* Distillation
* GNN
* Knowledge distillation