

Subgraph Pooling: Tackling Negative Transfer on Graphs

by Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye

First submitted to arXiv on: 14 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)

This paper addresses negative transfer in graph-structured data, where knowledge learned on a source graph fails to improve performance on a target graph. The authors find that structural differences between graphs can significantly amplify dissimilarities in node embeddings, degrading transfer performance. To mitigate this, they introduce Subgraph Pooling (SP) and Subgraph Pooling++ (SP++), which aggregate the embeddings of nodes sampled from a k-hop neighborhood or via a random walk. Pooling over subgraphs reduces the impact of graph structural differences on knowledge transfer, and the methods are simple enough to plug into any Graph Neural Network (GNN). The authors analyze the SP methods theoretically and evaluate them experimentally under a variety of settings, demonstrating their superiority.

Low Difficulty Summary (original content by GrooveSquid.com)

This paper is about how computers learn from one kind of data, like a social network, and apply that learning to another, similar dataset. Sometimes this doesn't work well because the two datasets have different structures or patterns. The authors figured out why this happens and developed techniques, called Subgraph Pooling (SP) and Subgraph Pooling++ (SP++), that let the learning carry over even when the structures differ. This matters because it can be applied in many areas, like social media analysis or recommendation systems.

Keywords

  • Artificial intelligence
  • GNN
  • Graph neural network