Summary of Sub-graph Based Diffusion Model for Link Prediction, by Hang Li et al.
Sub-graph Based Diffusion Model for Link Prediction
by Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang
First submitted to arXiv on: 13 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract; read it on the arXiv page |
Medium | GrooveSquid.com (original content) | Denoising diffusion probabilistic models (DDPMs) have been gaining attention for their strength in both sample synthesis and maximizing the data likelihood. These models traverse a forward Markov chain that gradually perturbs the data, followed by a reverse process in which a neural network learns to undo the perturbations and recover the original data (both steps are sketched after this table). Recent efforts have explored DDPMs in the graph domain, but mostly from a generative perspective. This paper instead builds a generative model for link prediction, treating each candidate link as a conditional likelihood estimation problem over its enclosing sub-graph. The proposed method decomposes this likelihood estimation via Bayes’ rule, letting it combine inductive learning with strong generalization. Experiments across various datasets demonstrate transferability without retraining, promising generalization from limited training data, and robustness against graph adversarial attacks. |
Low | GrooveSquid.com (original content) | DDPMs are a type of generative model that can create realistic images. They work by adding noise to an image and then using a neural network to remove the noise and get back the original image. Researchers have started applying these models to graphs, but mostly for generating new graphs rather than for prediction tasks. This paper changes that by building a model that predicts links between nodes in a graph. The idea is to treat link prediction as estimating how likely the sub-graph around two nodes is (a toy sub-graph extraction sketch appears below the table). The proposed method breaks this estimation into smaller parts, which helps it learn and generalize well. Tests on different datasets show several benefits: knowledge learned on one graph can be used to predict links on another without retraining, the model works well even with limited training data, and it resists adversarial attacks on the graph. |
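For readers who want the mechanics behind the medium-difficulty summary, here is a minimal sketch of the standard DDPM forward and reverse steps, together with one possible Bayes-style decomposition of the link-prediction likelihood. The notation (x_t, β_t, y_uv, G_uv) is our own illustrative choice, not taken from the paper.

```latex
% Standard DDPM forward (noising) and reverse (denoising) steps;
% x_t is the data after t noising steps, \beta_t the noise schedule,
% and \theta the parameters of the denoising network.
q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\bigr),
\qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\bigl(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\bigr).

% One illustrative Bayes-rule decomposition for link prediction:
% y_{uv} is the candidate link between nodes u and v, and G_{uv} its
% enclosing sub-graph (these symbols are assumptions, not the paper's notation).
p(y_{uv} \mid G_{uv}) = \frac{p(G_{uv} \mid y_{uv})\, p(y_{uv})}{p(G_{uv})}.
```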
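The low-difficulty summary hinges on the idea of "the sub-graph around two nodes." The sketch below is a hypothetical illustration of how such a k-hop enclosing sub-graph could be extracted with networkx; the function name, hop count, and example graph are assumptions, not part of the paper's code.

```python
# Illustrative sketch (not the paper's pipeline): extract the k-hop
# enclosing sub-graph around a candidate link (u, v), the object whose
# conditional likelihood a sub-graph based model would score.
import networkx as nx


def enclosing_subgraph(graph: nx.Graph, u, v, num_hops: int = 2) -> nx.Graph:
    """Return the sub-graph induced by all nodes within `num_hops` of u or v."""
    nodes = set()
    for root in (u, v):
        # Hop distances from `root`, truncated at `num_hops`.
        lengths = nx.single_source_shortest_path_length(graph, root, cutoff=num_hops)
        nodes.update(lengths.keys())
    return graph.subgraph(nodes).copy()


if __name__ == "__main__":
    G = nx.karate_club_graph()                       # toy example graph
    sub = enclosing_subgraph(G, 0, 33, num_hops=1)   # candidate link (0, 33)
    print(sub.number_of_nodes(), sub.number_of_edges())
```

In a sub-graph based model like the one the summaries describe, such an extracted neighborhood (rather than a whole image) would be the object whose conditional likelihood is estimated.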
Keywords
» Artificial intelligence » Attention » Generalization » Generative model » Likelihood » Neural network » Transferability