Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks

by Yassine Abbahaddou, Sofiane Ennadir, Johannes F. Lutzeyer, Michalis Vazirgiannis, Henrik Boström

First submitted to arXiv on: 27 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Graph Neural Networks (GNNs) have achieved state-of-the-art results in various graph representation learning tasks. However, recent studies have revealed their vulnerability to adversarial attacks. This paper formally defines the expected robustness of GNNs on attributed graphs and relates it to the classical notion of adversarial robustness in graph representation learning. The definition allows the authors to derive an upper bound on the expected robustness of Graph Convolutional Networks (GCNs) and Graph Isomorphism Networks (GINs) under node feature attacks. Building on these findings, they connect expected robustness to the orthonormality of GNN weight matrices and propose a more robust variant of the GCN, the Graph Convolutional Orthonormal Robust Network (GCORN). A probabilistic method is introduced to estimate the expected robustness, enabling evaluation on real-world datasets. Experimental results show that GCORN outperforms available defense methods.
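The connection between orthonormal weights and robustness can be made concrete: a weight matrix with orthonormal columns has unit spectral norm, so it passes a feature perturbation through a layer without amplifying it. Below is a minimal PyTorch sketch of this idea, assuming a dense normalized adjacency matrix `a_hat`; the QR-based projection and the layer name `OrthonormalGCNLayer` are illustrative choices, not the paper's exact GCORN construction.

```python
import torch

def orthonormalize(w: torch.Tensor) -> torch.Tensor:
    """Project a weight matrix onto one with orthonormal columns via QR."""
    q, _ = torch.linalg.qr(w)
    return q

class OrthonormalGCNLayer(torch.nn.Module):
    """GCN-style propagation relu(a_hat @ x @ w) with w orthonormalized.

    With orthonormal columns, ||x @ w|| <= ||x||, so a bounded perturbation
    of the node features x stays bounded after passing through the layer.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.empty(in_dim, out_dim))
        torch.nn.init.xavier_uniform_(self.weight)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized adjacency (n x n); x: node features (n x in_dim)
        return torch.relu(a_hat @ x @ orthonormalize(self.weight))
```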
Low Difficulty Summary (written by GrooveSquid.com; original content)
GNNs are super smart at learning about graphs, but they can be tricked by fake data. This paper helps us understand how GNNs behave when someone tries to make them fail. The researchers define a new way of thinking about how “tough” a GNN is and show that some GNNs are better than others at not falling for tricks. They also created a new, more robust version of one type of GNN, called GCORN. They tested it on real data and found that it worked really well.
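The probabilistic estimation mentioned in the medium-difficulty summary can be sketched as a simple Monte Carlo procedure: sample random perturbations of the node features within a fixed budget and measure how often the model's predictions survive. The uniform noise in an L-infinity ball of radius epsilon below is an assumed stand-in, not the paper's exact estimator.

```python
import torch

@torch.no_grad()
def estimate_expected_robustness(model, a_hat, x, epsilon=0.1, n_samples=100):
    """Monte Carlo estimate of a model's robustness to node feature noise.

    Draws n_samples perturbations with entries uniform in [-epsilon, epsilon]
    and returns the average fraction of nodes whose predicted class matches
    the prediction on the clean features (1.0 = fully robust).
    """
    clean = model(a_hat, x).argmax(dim=-1)
    agree = 0.0
    for _ in range(n_samples):
        noise = (2.0 * torch.rand_like(x) - 1.0) * epsilon
        noisy = model(a_hat, x + noise).argmax(dim=-1)
        agree += (noisy == clean).float().mean().item()
    return agree / n_samples
```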

Keywords

» Artificial intelligence  » GNN  » Representation learning