
Summary of E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised Classification, by Xin Zhang et al.


E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised Classification

by Xin Zhang, Daochen Zha, Qiaoyu Tan

First submitted to arXiv on: 6 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper explores ensemble learning for graph neural networks (GNNs) in a semi-supervised setting. Ensemble learning improves accuracy and robustness in traditional machine learning by combining multiple weak learners, but integrating different GNN models is challenging: GNN inference is inefficient, and each model performs poorly when trained with only a few labeled nodes. The authors propose an efficient ensemble learner, E2GNN, which combines multiple GNN models in a learnable way using both labeled and unlabeled nodes. The approach pre-trains different GNN models, trains a multi-layer perceptron (MLP) to mimic their predictions, and deploys the unified MLP for label inference. Because some teacher predictions are inevitably wrong, a reinforced discriminator is developed to filter out incorrectly predicted nodes so they do not mislead the MLP. Comprehensive experiments on 8 benchmark datasets demonstrate the superiority of E2GNN.
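
The distillation step described above can be pictured with a minimal PyTorch sketch. All tensor names, shapes, and the uniform averaging of teacher outputs are illustrative assumptions; the paper's learnable per-node combination of teachers and its reinforced discriminator are not reproduced here.

```python
# Minimal sketch: an MLP student mimics the averaged soft predictions of
# several pre-trained GNN teachers, so only the cheap MLP is deployed.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_nodes, feat_dim, num_classes = 100, 32, 7

# Stand-ins for node features and the pre-computed logits of 3 GNN teachers.
features = torch.randn(num_nodes, feat_dim)
teacher_logits = torch.randn(3, num_nodes, num_classes)  # hypothetical values

# Average the teachers' soft labels (a simple uniform ensemble).
soft_targets = F.softmax(teacher_logits, dim=-1).mean(dim=0)

# The MLP student that will be used for label inference.
mlp = nn.Sequential(
    nn.Linear(feat_dim, 64),
    nn.ReLU(),
    nn.Linear(64, num_classes),
)
optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for epoch in range(100):
    optimizer.zero_grad()
    log_probs = F.log_softmax(mlp(features), dim=-1)
    # KL divergence between the student's predictions and the ensemble targets.
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    loss.backward()
    optimizer.step()
```

In this sketch the distillation uses both labeled and unlabeled nodes, since the teachers' soft labels are available for every node.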
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at ways to make graph neural networks (GNNs) work better together by combining the results of multiple GNN models. GNNs are good at processing data that is connected in a network, but they can be tricky to use because they’re not very good at making predictions when there’s not much labeled data available. The authors come up with a new way to combine different GNN models, called E2GNN, which uses both labeled and unlabeled data to make better predictions. They also develop a way to filter out predictions that are likely to be wrong, which helps the system work even better.
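
The filtering idea can also be sketched in a simplified form. The paper trains a reinforced discriminator for this; the confidence-threshold rule below is only a hypothetical stand-in to illustrate what "filtering out predictions that are likely to be wrong" means.

```python
# Simplified, hypothetical stand-in for the paper's reinforced discriminator:
# keep only node predictions whose softmax confidence exceeds a threshold.
import torch
import torch.nn.functional as F

teacher_logits = torch.randn(100, 7)       # illustrative GNN teacher outputs
probs = F.softmax(teacher_logits, dim=-1)
confidence, predicted_class = probs.max(dim=-1)

keep_mask = confidence > 0.5               # assumed confidence threshold
print(f"Keeping {int(keep_mask.sum())} of {keep_mask.numel()} node predictions")
```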

Keywords

» Artificial intelligence  » GNN  » Inference  » Machine learning  » Semi-supervised