Summary of “Graph Pre-Training Models Are Strong Anomaly Detectors” by Jiashun Cheng et al.
Graph Pre-Training Models Are Strong Anomaly Detectors
by Jiashun Cheng, Zinan Zheng, Yang Liu, Jianheng Tang, Hongwei Wang, Yu Rong, Jia Li, Fugee Tsung
First submitted to arXiv on: 24 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | This paper investigates the effectiveness of Graph Neural Networks (GNNs) for anomaly detection in graphs, highlighting graph pre-training models as strong graph anomaly detectors. The authors demonstrate that pre-training outperforms state-of-the-art end-to-end training models when supervision is limited, which they attribute to pre-training’s ability to detect distant, under-represented, unlabeled anomalies beyond the 2-hop neighborhoods of known anomalies. The paper also extends its examination to graph-level anomaly detection and offers insights for future research. |
| Low | GrooveSquid.com (original content) | Graphs can contain anomalies that need to be found. This is hard because a GNN has to learn node representations and a classifier at the same time. Models like DGI and GraphMAE can pre-train on graphs before being used for other tasks, so how well do they work for anomaly detection? The researchers found that pre-trained models are very good at finding anomalies even with limited labeled information, because pre-training helps detect anomalies that are far away from the known ones. They also found that pre-training is useful for graph-level anomaly detection (a toy sketch of the pre-train-then-detect recipe follows this table). |
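The recipe the summaries describe is a two-stage pipeline: first pre-train a GNN encoder with a self-supervised objective, then fit a lightweight anomaly scorer on the frozen embeddings using the few available labels. Below is a minimal, hypothetical sketch of that pipeline in plain PyTorch, using a toy two-layer GCN and a masked-feature-reconstruction objective in the spirit of GraphMAE. The synthetic graph, the random labels, and all names here are illustrative assumptions, not the authors’ code or the exact objectives of DGI/GraphMAE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- Toy synthetic graph (illustrative only): random features + normalized adjacency ---
torch.manual_seed(0)
n_nodes, n_feats, hidden = 200, 16, 32
x = torch.randn(n_nodes, n_feats)
adj = (torch.rand(n_nodes, n_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()            # symmetrize
adj.fill_diagonal_(1.0)                        # add self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2

class GCNEncoder(nn.Module):
    """Two-layer GCN: H = A_norm @ relu(A_norm @ X W1) W2."""
    def __init__(self, n_feats, hidden):
        super().__init__()
        self.w1 = nn.Linear(n_feats, hidden)
        self.w2 = nn.Linear(hidden, hidden)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

# --- Stage 1: self-supervised pre-training via masked-feature reconstruction ---
encoder = GCNEncoder(n_feats, hidden)
decoder = nn.Linear(hidden, n_feats)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
for _ in range(100):
    mask = torch.rand(n_nodes) < 0.3           # hide features of ~30% of nodes
    x_in = x.clone()
    x_in[mask] = 0.0
    z = encoder(x_in, adj_norm)
    loss = F.mse_loss(decoder(z)[mask], x[mask])  # reconstruct the masked features
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: anomaly scoring with very few labels on frozen embeddings ---
with torch.no_grad():
    z = encoder(x, adj_norm)                   # frozen pre-trained embeddings
labeled = torch.randperm(n_nodes)[:10]         # only 10 labeled nodes
y = (torch.rand(10) < 0.2).float()             # hypothetical anomaly labels (1 = anomaly)
clf = nn.Linear(hidden, 1)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(200):
    logits = clf(z[labeled]).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    scores = torch.sigmoid(clf(z)).squeeze(-1)  # higher score = more anomalous
```

In this setup the encoder never sees anomaly labels during pre-training; only the final linear scorer does, which mirrors the limited-supervision setting the paper studies.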
Keywords
- Artificial intelligence
- Anomaly detection