
A Topology-aware Graph Coarsening Framework for Continual Graph Learning

by Xiaoxue Han, Zhuo Feng, Yue Ning

First submitted to arXiv on: 5 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel continual learning framework for training graph neural networks (GNNs) on streaming graph data. Traditional methods such as Experience Replay can be adapted to this setting, but they often struggle to preserve graph topology and to capture correlations between old and new tasks. The proposed framework, called TACO, addresses these challenges by storing information from previous tasks as a reduced graph that expands and contracts at each time period while maintaining topological information. The authors design a graph coarsening algorithm based on node representation proximities to efficiently reduce the graph while preserving its topology. The framework is validated on three real-world datasets using different backbone GNN models.
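
To make the coarsening idea concrete, here is a minimal sketch of similarity-based node merging in Python with NetworkX. It assumes cosine similarity between precomputed node embeddings as the proximity measure and a simple greedy pairing; the function name, the reduction strategy, and the embedding-averaging rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of similarity-based graph coarsening (illustrative only).
# Assumptions: node embeddings are precomputed, cosine similarity is the
# proximity measure, and merging is greedy -- the paper's actual scoring
# and reduction procedure may differ.
import numpy as np
import networkx as nx

def coarsen_by_similarity(G, emb, reduction_ratio=0.5):
    """Repeatedly contract the most similar adjacent node pair until the
    graph shrinks to the target size. A contracted node inherits the union
    of both neighborhoods, so the coarse graph keeps the connectivity
    pattern of the original."""
    G = G.copy()
    emb = dict(emb)  # node -> 1-D numpy embedding
    target = max(1, int(G.number_of_nodes() * reduction_ratio))

    def cos(u, v):
        a, b = emb[u], emb[v]
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    while G.number_of_nodes() > target and G.number_of_edges() > 0:
        u, v = max(G.edges, key=lambda e: cos(e[0], e[1]))
        G = nx.contracted_nodes(G, u, v, self_loops=False)  # merge v into u
        emb[u] = (emb[u] + emb.pop(v)) / 2.0  # merged node: mean embedding
    return G, emb
```

For example, calling coarsen_by_similarity(G, emb, 0.5) roughly halves the node count; each surviving node carries the averaged embedding and the combined edges of everything merged into it.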

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper tackles a common problem in machine learning: training a model on new data without it forgetting what it learned before. This matters because data often arrives over time, and we want the model to keep improving. The authors created a new way to learn on graphs, which are used for applications like social network analysis and recommender systems. Their method stores information from previous tasks in a compact form that is easy to reuse later. They also developed an algorithm that makes this storage more efficient by combining similar nodes, reducing the graph's size while keeping its structure intact. Tests on three real-world datasets with different types of models show that the approach is effective.
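
The store-expand-contract cycle described in both summaries can be pictured as a small outer loop around the coarsening sketch above. The training call and the way embeddings are refreshed are placeholders; names here are hypothetical and the paper's actual procedure is more involved.

```python
# Hypothetical continual-learning loop reusing coarsen_by_similarity above.
# train_gnn is a placeholder for training the backbone GNN; all names are
# illustrative, not taken from the paper's code.
import networkx as nx

def continual_step(reduced_G, reduced_emb, new_G, new_emb, train_gnn,
                   reduction_ratio=0.5):
    # Expand: combine the stored reduced graph with the new task's graph.
    combined = nx.compose(reduced_G, new_G)
    combined_emb = {**reduced_emb, **new_emb}
    # Train on the combined graph so old and new tasks are seen together.
    model = train_gnn(combined, combined_emb)
    # Contract: coarsen the combined graph back down before storing it.
    reduced_G, reduced_emb = coarsen_by_similarity(
        combined, combined_emb, reduction_ratio)
    return model, reduced_G, reduced_emb
```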

Keywords

* Artificial intelligence
* Continual learning
* GNN
* Machine learning