
Summary of Efficient Neural Common Neighbor For Temporal Graph Link Prediction, by Xiaohui Zhang et al.


by Xiaohui Zhang, Yanbo Wang, Xiyuan Wang, Muhan Zhang

First submitted to arXiv on: 12 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel approach for predicting dynamic links between nodes in temporal graphs, which arise in real-world settings such as social networks and transportation systems. Traditional methods focus on learning individual node representations from temporal neighborhood information but overlook the inherently pairwise nature of link prediction. To address this, the authors introduce TNCN (Temporal Neural Common Neighbor), a temporal extension of NCN, a model originally designed for static-graph link prediction. TNCN dynamically updates a temporal neighbor dictionary for each node and uses multi-hop common neighbors to learn a more effective pairwise representation. Evaluated on five large-scale real-world datasets from the Temporal Graph Benchmark, TNCN achieves state-of-the-art performance on three of them. It also scales well, running up to 6.4 times faster than popular GNN baselines.
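To make the core idea concrete, here is a minimal sketch (not the authors' implementation; all class and method names are hypothetical) of maintaining a per-node temporal neighbor dictionary as interaction events stream in, and intersecting two nodes' dictionaries to find the common neighbors seen before a query time:

```python
from collections import defaultdict


class TemporalNeighborDict:
    """Illustrative sketch of the temporal-neighbor-dictionary idea:
    each node maps to a dict of {neighbor: latest interaction time},
    updated as events arrive. For a candidate link (u, v) at time t,
    the one-hop common neighbors are the intersection of the two
    nodes' past neighborhoods."""

    def __init__(self):
        # node -> {neighbor: latest interaction timestamp}
        self.neighbors = defaultdict(dict)

    def update(self, u, v, t):
        # Record the interaction in both directions (undirected event).
        self.neighbors[u][v] = t
        self.neighbors[v][u] = t

    def common_neighbors(self, u, v, t):
        # One-hop common neighbors observed strictly before query time t.
        nu = {w for w, tw in self.neighbors[u].items() if tw < t}
        nv = {w for w, tw in self.neighbors[v].items() if tw < t}
        return nu & nv


# Usage: stream a few events, then query a candidate link.
tnd = TemporalNeighborDict()
for u, v, t in [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 3.0)]:
    tnd.update(u, v, t)
print(tnd.common_neighbors(0, 1, 4.0))  # {2}
```

The full model additionally considers multi-hop common neighbors and feeds the resulting pairwise structure into a learned representation; this sketch only shows the dictionary bookkeeping that makes such queries cheap on streaming data.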
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper solves a big problem: predicting what will happen between people or things over time. This is important for social networks, transportation systems, and more. Right now, computers use old methods that focus on individual things, but they don’t understand the connections between them. The new method, TNCN, looks at how these connections change over time to make better predictions. It works really well on big datasets and can do it much faster than other popular computer programs.

Keywords

» Artificial intelligence  » GNN  » Representation learning