

GAIM: Attacking Graph Neural Networks via Adversarial Influence Maximization

by Xiaodong Yang, Xiaoting Li, Huiyuan Chen, Yiwei Cai

First submitted to arXiv on: 20 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
A novel integrated adversarial attack method for Graph Neural Networks (GNNs) is presented, addressing limitations in existing approaches. GAIM is a node feature-based attack designed for the black-box setting, which reframes the attack as an adversarial influence maximization task. The approach unifies target node selection and feature perturbation construction into a single optimization problem, using a surrogate model to streamline the process. The method is further extended to accommodate label-oriented attacks. Evaluations on five benchmark datasets across three popular GNN models demonstrate its effectiveness in both untargeted and targeted attacks.
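The summary above describes greedily selecting target nodes by how much adversarial influence a feature perturbation at each node exerts on a surrogate model's predictions. A minimal sketch of that idea is shown below; it is not the paper's actual algorithm. The one-hop linear surrogate, the sign-based perturbation heuristic, and the function names (`flips`, `gaim_sketch`) are all illustrative assumptions.

```python
import numpy as np

def flips(adj, X, W, base_pred):
    """Count nodes whose surrogate prediction differs from the clean one.
    Surrogate (assumed): a one-hop linear GCN-style model, logits = A @ X @ W."""
    return int((np.argmax(adj @ X @ W, axis=1) != base_pred).sum())

def gaim_sketch(adj, X, W, k, eps):
    """Greedy influence-maximization-style attack (illustrative sketch):
    pick k target nodes, perturbing each node's features in a crude
    loss-increasing direction, keeping the node whose perturbation
    flips the most predictions under the surrogate."""
    base_pred = np.argmax(adj @ X @ W, axis=1)
    Xp = X.copy()
    chosen = []
    for _ in range(k):
        best_v, best_gain, best_feat = None, -1, None
        for v in range(adj.shape[0]):
            if v in chosen:
                continue
            # Heuristic perturbation: push v's features away from the class
            # weight vector of its current prediction (a placeholder for the
            # paper's optimized perturbation construction).
            direction = -W[:, base_pred[v]]
            trial = Xp.copy()
            trial[v] = trial[v] + eps * np.sign(direction)
            gain = flips(adj, trial, W, base_pred)
            if gain > best_gain:
                best_v, best_gain, best_feat = v, gain, trial[v]
        Xp[best_v] = best_feat
        chosen.append(best_v)
    return chosen, Xp
```

The greedy loop mirrors classical influence maximization: each round commits the single node whose perturbation yields the largest marginal number of flipped predictions, which is where unifying target selection and perturbation construction into one objective pays off.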
Low Difficulty Summary (original content by GrooveSquid.com)
GAIM is an attack method for Graph Neural Networks that makes it harder for these networks to make good predictions. Instead of trying to fool the network all at once, GAIM focuses on individual nodes or points on a graph. It’s like trying to distract someone from paying attention to something important by making small changes around them. This approach is better than others because it considers how the network might react to different types of distractions, and it makes sure that each distraction is unique and consistent. The method was tested on several datasets and showed that it can be very effective at fooling GNNs.

Keywords

» Artificial intelligence  » Attention  » GNN  » Optimization