
Summary of On the Robustness of Graph Reduction Against GNN Backdoor, by Yuxuan Zhu et al.


On the Robustness of Graph Reduction Against GNN Backdoor

by Yuxuan Zhu, Michael Mandulak, Kerui Wu, George Slota, Yuseok Jeon, Ka-Ho Chow, Lei Yu

First submitted to arXiv on: 2 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks (GNNs) are a popular choice for processing graph-structured data due to their effectiveness, but they are vulnerable to backdoor poisoning attacks, which pose significant risks to real-world applications. Graph reduction techniques, such as coarsening and sparsification, are widely used to improve the scalability of GNN training on large-scale graphs, yet it is unclear how they behave when the training graph has been poisoned. This paper investigates how graph reduction methods interact with existing backdoor attacks in scalable GNN training. The authors evaluate six coarsening methods and six sparsification methods under three GNN backdoor attacks and three GNN architectures. The results show that how well graph reduction mitigates the attacks varies significantly from method to method, and that some methods even exacerbate the attacks.
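To make the evaluated pipeline concrete, the sketch below walks through the basic loop the summary describes: poison a graph, reduce it, train a GNN, and measure both clean accuracy and attack success rate. It is a minimal illustration only, not the paper's code: the synthetic graph, the feature-pattern trigger (the paper's attacks inject trigger subgraphs), the random edge sparsification used as a stand-in reduction method, and all hyperparameters are assumptions made for this example.

```python
# Hypothetical sketch of the evaluation pipeline: poison a synthetic graph,
# apply a simple graph reduction, train a small GCN, measure clean accuracy
# and attack success rate (ASR). Illustrative only; not the paper's setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic node-classification graph (dense adjacency for simplicity).
n, d, num_classes = 300, 16, 2
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.02).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
y = torch.randint(0, num_classes, (n,))

# Backdoor poisoning: stamp a fixed feature pattern (the "trigger") onto a
# small set of training nodes and relabel them with the attacker's target.
target_class = 1
trigger_feat = torch.ones(d) * 3.0
poison_idx = torch.randperm(n)[:30]
x_poison = x.clone()
x_poison[poison_idx] = trigger_feat
y_poison = y.clone()
y_poison[poison_idx] = target_class

# Graph reduction: random edge sparsification as a stand-in for the
# coarsening/sparsification methods studied in the paper.
def sparsify(adj, keep_ratio=0.5):
    mask = (torch.rand_like(adj) < keep_ratio).float()
    mask = torch.triu(mask, diagonal=1)
    return adj * (mask + mask.t())

adj_reduced = sparsify(adj, keep_ratio=0.5)

# Minimal 2-layer GCN over a symmetrically normalized dense adjacency.
def normalize(adj):
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.w1 = torch.nn.Linear(d_in, d_hidden)
        self.w2 = torch.nn.Linear(d_hidden, d_out)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

a_norm = normalize(adj_reduced)
model = GCN(d, 32, num_classes)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Train on the poisoned features/labels over the reduced graph.
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(x_poison, a_norm), y_poison)
    loss.backward()
    opt.step()

# Evaluate clean accuracy and ASR (fraction of triggered test nodes that the
# backdoored model pushes to the attacker's target class).
with torch.no_grad():
    clean_pred = model(x, normalize(adj)).argmax(1)
    clean_acc = (clean_pred == y).float().mean().item()

    x_trig = x.clone()
    test_idx = torch.randperm(n)[:50]
    x_trig[test_idx] = trigger_feat
    trig_pred = model(x_trig, normalize(adj)).argmax(1)
    asr = (trig_pred[test_idx] == target_class).float().mean().item()

print(f"clean accuracy: {clean_acc:.2f}, attack success rate: {asr:.2f}")
```

Repeating this loop while swapping in different reduction methods, attacks, and GNN architectures, and comparing the resulting attack success rates, is the kind of comparison the paper performs at scale.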
Low Difficulty Summary (written by GrooveSquid.com, original content)
GNNs are special kinds of computer programs that learn from data where the pieces are connected to each other. They're good at understanding complex relationships in things like social networks or chemical reactions. But sometimes, bad actors might try to trick these programs by adding fake information. This can happen when we use special tricks to make the program work faster on very large datasets, and we don't know yet whether these tricks help or hurt the program's ability to resist this kind of cheating. In this study, the researchers looked at how well these tricks keep GNNs safe from attacks. They tested twelve ways to make the program work faster against three types of attacks and three different versions of the program. The results show that some tricks actually made things worse, while others helped a little. This means we need to think carefully about how we use these tricks so they don't accidentally make our programs less secure.

Keywords

» Artificial intelligence  » GNN