Summary of Robust Training of Temporal GNNs using Nearest Neighbours based Hard Negatives, by Shubham Gupta et al.
Robust Training of Temporal GNNs using Nearest Neighbours based Hard Negatives
by Shubham Gupta, Srikanta Bedathur
First submitted to arXiv on: 14 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Information Retrieval (cs.IR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes an improvement to Temporal Graph Neural Networks (TGNNs) for future-link prediction tasks. Current TGNN training combines an unsupervised loss with uniform random negative sampling, and this uniform sampling can yield redundant, uninformative negatives and sub-optimal performance. To address this, the authors modify the unsupervised training objective, replacing uniform negative sampling with importance-based negative sampling, and they theoretically motivate and define a dynamically computed distribution over negative examples. Empirical evaluations on three real-world datasets show that TGNNs trained with the proposed loss outperform previous methods; a minimal illustrative sketch of this sampling idea appears after the table. |
| Low | GrooveSquid.com (original content) | This paper is about making Temporal Graph Neural Networks better at predicting future links. Right now, these networks are trained in a way that is not very effective because they use randomly chosen negative examples. The authors propose a new training method that instead picks the most informative negative examples. They explain why this is a good idea and show how to do it. Then they test their approach on three real-world datasets and find that it works better than before. |
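The sketch below illustrates the general idea of importance-based hard-negative sampling for a link-prediction loss; it is not the authors' exact method. The dot-product similarity score, the softmax temperature, the candidate pool, and all function names are assumptions made only for illustration.

```python
# Illustrative sketch only: sampling hard negatives for a link-prediction loss.
# The similarity-based importance weights and the temperature are assumptions,
# not the paper's exact formulation.
import torch
import torch.nn.functional as F


def sample_hard_negatives(src_emb, cand_emb, num_neg, temperature=1.0):
    """Sample negative destination nodes with probability proportional to their
    similarity to the source embeddings (harder negatives are picked more often),
    instead of uniformly at random.

    src_emb:  (batch, dim) embeddings of source nodes
    cand_emb: (num_candidates, dim) embeddings of candidate negative nodes
    """
    # Importance scores: higher similarity => harder negative.
    scores = src_emb @ cand_emb.t() / temperature              # (batch, num_candidates)
    probs = F.softmax(scores, dim=-1)
    neg_idx = torch.multinomial(probs, num_neg, replacement=True)  # (batch, num_neg)
    return neg_idx


def link_prediction_loss(src_emb, pos_emb, cand_emb, num_neg=5):
    """Binary cross-entropy loss with one positive and several hard negatives
    per source node."""
    neg_idx = sample_hard_negatives(src_emb, cand_emb, num_neg)
    neg_emb = cand_emb[neg_idx]                                # (batch, num_neg, dim)

    pos_logit = (src_emb * pos_emb).sum(-1)                    # (batch,)
    neg_logit = (src_emb.unsqueeze(1) * neg_emb).sum(-1)       # (batch, num_neg)

    pos_loss = F.binary_cross_entropy_with_logits(pos_logit, torch.ones_like(pos_logit))
    neg_loss = F.binary_cross_entropy_with_logits(neg_logit, torch.zeros_like(neg_logit))
    return pos_loss + neg_loss


# Usage with random embeddings standing in for a temporal GNN encoder's output:
if __name__ == "__main__":
    batch, num_candidates, dim = 32, 1000, 64
    src = torch.randn(batch, dim)
    pos = torch.randn(batch, dim)
    cands = torch.randn(num_candidates, dim)
    print(link_prediction_loss(src, pos, cands).item())
```

The only change from a standard uniform-sampling setup is in `sample_hard_negatives`: swapping the softmax over similarities for a uniform distribution recovers ordinary random negative sampling.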
Keywords
- Artificial intelligence
- Loss function
- Unsupervised