Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision

by Zeyang Zhang, Xin Wang, Ziwei Zhang, Guangyao Shen, Shiqi Shen, Wenwu Zhu

First submitted to arXiv on: 8 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper tackles unsupervised graph neural architecture search: existing graph architecture-search methods depend on supervised labels during the search, which are often unavailable in practice. The authors propose the Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which discovers architectures that capture latent graph factors from unlabeled data. DSGAS combines three components: a disentangled graph super-network whose factor-wise sub-architectures are optimized simultaneously, self-supervised training with joint architecture-graph disentanglement, and contrastive search with architecture augmentations. Experiments on 11 real-world datasets show state-of-the-art performance over baseline methods.
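
To make the three ingredients above a little more concrete, here is a minimal, self-contained PyTorch sketch of the general pattern the summary describes: factor-wise architecture weights over a pool of candidate operations, plus a contrastive loss between two augmented views. This is an illustrative approximation, not the authors' DSGAS implementation; all names (CandidateOp, DisentangledSuperNet, num_factors) and the dropout-based "augmentation" are assumptions for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateOp(nn.Module):
    """One candidate message-passing op: mean neighbor aggregation plus a
    linear map, standing in for a real GNN operator in the search space."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))

class DisentangledSuperNet(nn.Module):
    """Super-network holding one architecture-weight vector per latent
    factor, so each factor can prefer a different mixture of candidate
    ops (a greatly simplified take on factor-wise disentanglement)."""
    def __init__(self, dim, num_ops=3, num_factors=4):
        super().__init__()
        self.ops = nn.ModuleList(CandidateOp(dim) for _ in range(num_ops))
        self.arch = nn.Parameter(torch.zeros(num_factors, num_ops))

    def forward(self, x, adj):
        outs = torch.stack([op(x, adj) for op in self.ops])  # (O, N, D)
        w = F.softmax(self.arch, dim=-1)                     # (K, O)
        return torch.einsum("ko,ond->knd", w, outs)          # (K, N, D)

def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss pulling matching nodes in two views together."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy run: random graph, two feature-dropout "views" as a stand-in for
# the paper's architecture augmentations (an assumption, not the real scheme).
N, D = 8, 16
x, adj = torch.randn(N, D), (torch.rand(N, N) > 0.7).float()
net = DisentangledSuperNet(D)
z1 = net(F.dropout(x, 0.2), adj)[0]  # factor-0 embeddings, view 1
z2 = net(F.dropout(x, 0.2), adj)[0]  # factor-0 embeddings, view 2
contrastive_loss(z1, z2).backward()  # gradients also reach the arch weights
```

Note the design choice this sketch highlights: because each latent factor gets its own softmax over candidate operations, the search can settle on different architectures for different factors without any labels, with the contrastive objective supplying the training signal.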
Low Difficulty Summary (original content by GrooveSquid.com)
The paper talks about finding the best way to create models for graphs (like social networks or traffic patterns) without needing labeled data. The current ways of doing this rely on labeled information, but that’s not always available. To solve this problem, the authors created a new model called DSGAS (Disentangled Self-supervised Graph Neural Architecture Search). This model can figure out what makes graphs tick and what kind of models work best for them, all without needing labels. The authors tested their model on many real-world datasets and showed that it performs better than other methods.

Keywords

  • Artificial intelligence
  • Self supervised
  • Supervised
  • Unsupervised