Summary of CVTGAD: Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection, by Jindong Li et al.
CVTGAD: Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection
by Jindong Li, Qianli Xing, Qi Wang, Yi Chang
First submitted to arXiv on: 3 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract. |
| Medium | GrooveSquid.com (original content) | CVTGAD (Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection) addresses the limitations of existing methods in unsupervised graph-level anomaly detection (UGAD). It constructs a simplified transformer-based module and a cross-view attention mechanism, enlarging the receptive field to capture both intra-graph and inter-graph relationships. This design couples graph neural networks with transformers, and it outperforms previous methods on 15 real-world datasets across three fields. |
| Low | GrooveSquid.com (original content) | CVTGAD is a new way to find unusual patterns in graphs without any labeled examples. Existing methods only looked at small parts of a graph, but CVTGAD looks at the whole graph and also connects different views of it together. This helps it find anomalies that other methods might miss. The team tested CVTGAD on many real-world datasets and found that it works better than previous methods. |
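The cross-view attention idea described above can be illustrated with a small sketch. This is not the authors' exact formulation; it is a generic cross-attention toy in NumPy, assuming two hypothetical per-node embedding views (e.g., a feature view and a structure view), where queries from one view attend to keys/values from the other so each node's representation is enriched with information from the second view.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(h_a, h_b):
    """Queries come from view A; keys and values come from view B.

    h_a, h_b: (num_nodes, dim) embeddings of the same nodes under two views.
    Returns view-A embeddings re-weighted by their affinity to view B.
    """
    d = h_a.shape[-1]
    scores = h_a @ h_b.T / np.sqrt(d)   # (n, n) cross-view affinities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ h_b                # mix view-B values into view-A nodes

# Toy usage with random embeddings standing in for GNN outputs.
rng = np.random.default_rng(0)
h_feat = rng.normal(size=(5, 8))    # hypothetical feature-view embeddings
h_struct = rng.normal(size=(5, 8))  # hypothetical structure-view embeddings
fused = cross_view_attention(h_feat, h_struct)
```

In a UGAD pipeline along these lines, a fused representation like `fused` would feed an anomaly score (e.g., a reconstruction or contrastive objective); the sketch only shows the attention step itself.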
Keywords
» Artificial intelligence » Anomaly detection » Attention » Transformer » Unsupervised