AHSG: Adversarial Attacks on High-level Semantics in Graph Neural Networks

by Kai Yuan, Xiaobing Pei, Haoran Yang

First submitted to arXiv on 10 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract on arXiv.
Medium Difficulty Summary (GrooveSquid.com, original content)
The paper introduces AHSG (Adversarial Attacks on High-level Semantics in Graph Neural Networks), a novel attack that disrupts secondary semantic information in GNNs while preserving the primary semantics. Building on existing adversarial attack methods, AHSG uses convolutional operations to extract rich semantic information from graph data, then applies the Projected Gradient Descent (PGD) algorithm to map the latent representations carrying the attack effect back to an attack graph. Experimental results show that AHSG outperforms other attack methods in attack effectiveness. Additionally, the paper employs Contextual Stochastic Block Models (CSBMs) as a proxy for primary semantics to test attacked graphs, confirming that AHSG does not disrupt the original primary semantics.
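To make the attack pipeline more concrete, below is a minimal sketch of a generic PGD-style structure attack on a two-layer GCN surrogate: edge flips are relaxed to continuous variables, gradient ascent increases the classification loss, and the result is projected back onto a perturbation budget. This illustrates only the general PGD-on-graphs idea, not the authors' AHSG implementation; all names and parameters (`gcn_forward`, `pgd_structure_attack`, `budget`, `lr`) are hypothetical.

```python
# Sketch only: a generic PGD structure attack on a GCN surrogate,
# NOT the authors' AHSG method. All names here are illustrative.
import torch
import torch.nn.functional as F

def gcn_forward(adj, x, w1, w2):
    # Two-layer GCN with symmetric degree normalization.
    deg = adj.sum(1).clamp(min=1e-8)
    d_inv_sqrt = deg.pow(-0.5)
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    h = torch.relu(norm_adj @ x @ w1)
    return norm_adj @ h @ w2

def pgd_structure_attack(adj, x, labels, w1, w2, budget=20, steps=100, lr=0.1):
    n = adj.size(0)
    mask = torch.triu(torch.ones(n, n), diagonal=1)  # undirected, no self-loops
    s = torch.zeros(n, n, requires_grad=True)        # relaxed edge-flip variables
    for _ in range(steps):
        pert = s * mask
        pert = pert + pert.T
        # Flip edges: existing edges are removed, missing edges added.
        adj_pert = adj + (1 - 2 * adj) * pert
        loss = F.cross_entropy(gcn_forward(adj_pert, x, w1, w2), labels)
        grad, = torch.autograd.grad(loss, s)
        with torch.no_grad():
            s += lr * grad                 # gradient ascent on the attack loss
            s.clamp_(0.0, 1.0)             # box constraint
            if s.sum() > budget:           # crude budget projection (rescaling);
                s *= budget / s.sum()      # an exact simplex projection differs
    # Discretize: keep the `budget` strongest flips.
    with torch.no_grad():
        flat = (s * mask).flatten()
        idx = flat.topk(budget).indices
        flips = torch.zeros_like(flat)
        flips[idx] = 1.0
        flips = flips.view(n, n)
        flips = flips + flips.T
    return adj + (1 - 2 * adj) * flips
```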
Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about creating new kinds of attacks on Graph Neural Networks (GNNs). These attacks aim to change how a GNN understands secondary information while keeping the main information the same. The researchers propose a method called AHSG (Adversarial Attacks on High-level Semantics in Graph Neural Networks) that does exactly this. They test their method and show that it attacks GNNs more effectively than existing methods. The paper matters because understanding these attacks helps us protect GNNs against them.

Keywords

» Artificial intelligence  » Gradient descent  » Semantics