Summary of Disttack: Graph Adversarial Attacks Toward Distributed GNN Training, by Yuxiang Zhang et al.


Disttack: Graph Adversarial Attacks Toward Distributed GNN Training

by Yuxiang Zhang, Xin Liu, Meng Wu, Wei Yan, Mingyu Yan, Xiaochun Ye, Dongrui Fan

First submitted to arxiv on: 10 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the researchers develop a novel approach to attacking Graph Neural Networks (GNNs) that are trained in a distributed manner. They point out that existing adversarial attack methods on GNNs neglect the characteristics and constraints of the distributed training scenario. To address this gap, they propose a method that explicitly accounts for how training is split across multiple workers, enabling attacks on distributed GNN training that are both more effective and more efficient.
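
To make the core idea more concrete (perturbing only the part of the graph held by a single worker rather than the whole graph), here is a minimal sketch. It assumes a toy two-layer GCN in PyTorch and a hypothetical partition split; the perturbation step is a generic FGSM-style gradient attack used purely for illustration, not the paper's actual Disttack algorithm.

```python
# Minimal sketch (hypothetical, not the paper's Disttack code): an FGSM-style
# feature perturbation restricted to the node partition held by one simulated
# worker of a toy GNN. The model, graph, and partition split are made up here
# purely for illustration.
import torch
import torch.nn.functional as F


class TinyGCNLayer(torch.nn.Module):
    """One graph convolution: (normalized adjacency) @ features @ weights."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        return self.lin(adj @ x)


class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.l1 = TinyGCNLayer(in_dim, hid_dim)
        self.l2 = TinyGCNLayer(hid_dim, out_dim)

    def forward(self, adj, x):
        return self.l2(adj, F.relu(self.l1(adj, x)))


def normalize_adj(adj):
    """Symmetrically normalize A + I, as in a standard GCN."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


torch.manual_seed(0)
num_nodes, feat_dim, num_classes = 8, 16, 3
adj = normalize_adj((torch.rand(num_nodes, num_nodes) > 0.7).float())
x = torch.randn(num_nodes, feat_dim)
y = torch.randint(0, num_classes, (num_nodes,))
model = TinyGCN(feat_dim, 32, num_classes)

# Pretend the first half of the nodes live on the compromised worker's partition.
attacked = torch.arange(num_nodes // 2)

# Gradient of the training loss with respect to the input features.
x_adv = x.clone().requires_grad_(True)
F.cross_entropy(model(adj, x_adv), y).backward()

# One FGSM-style step, applied only to the attacked partition's features.
eps = 0.1
perturb = torch.zeros_like(x)
perturb[attacked] = eps * x_adv.grad[attacked].sign()
x_attacked = (x + perturb).detach()

with torch.no_grad():
    print("loss on clean features   :", F.cross_entropy(model(adj, x), y).item())
    print("loss on attacked features:", F.cross_entropy(model(adj, x_attacked), y).item())
```

In a real distributed run, only the compromised worker's local features and gradients would be visible to the attacker; this toy script simulates that constraint by masking the perturbation to a single partition.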

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about finding ways to make Graph Neural Networks (GNNs), which learn from graphs, stop working well. Right now, people use many computers at once to train these networks, but attackers might want to break that training. The researchers found that current methods for attacking GNNs aren't very good because they don't consider how the training is spread across many computers. So they came up with a new way to make these attacks more effective and more efficient.

Keywords

» Artificial intelligence  » GNN