Summary of Graph Contrastive Invariant Learning From the Causal Perspective, by Yanhu Mo et al.


Graph Contrastive Invariant Learning from the Causal Perspective

by Yanhu Mo, Xiao Wang, Shaohua Fan, Chuan Shi

First submitted to arXiv on: 23 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty summary is the paper’s original abstract.

Medium Difficulty Summary (original content by GrooveSquid.com)

Graph contrastive learning (GCL), a self-supervised method for learning node representations from augmented graphs, has gained popularity. However, it’s unclear whether GCL always learns invariant representations in practice. This paper investigates GCL through the lens of causality using structural causal models (SCMs). The authors find that traditional GCL may not learn invariant representations, because the graph contains non-causal information. To address this issue, they propose a novel GCL method that incorporates spectral graph augmentation, an invariance objective, and an independence objective. Specifically, the invariance objective captures invariant information from causal variables, while the independence objective reduces the influence of confounders on causal variables. The approach is demonstrated to be effective for node classification tasks.

Low Difficulty Summary (original content by GrooveSquid.com)

This paper looks at a method called Graph Contrastive Learning (GCL) that helps computers learn about graphs. GCL tries to make computers understand what’s important in a graph, but does it always work? The authors of this paper think about GCL in a different way by using something called Structural Causal Models. They find out that sometimes GCL doesn’t do the job because there’s extra information in the graph that shouldn’t be counted. To fix this, they come up with a new way to make GCL work better. This new way uses special tricks like “spectral graph augmentation” and “invariance objectives”. It seems to help computers make good decisions when classifying nodes.

Keywords

  • Artificial intelligence
  • Classification
  • Self-supervised