
Summary of Debiased Graph Poisoning Attack Via Contrastive Surrogate Objective, by Kanghoon Yoon et al.


Debiased Graph Poisoning Attack via Contrastive Surrogate Objective

by Kanghoon Yoon, Yeonjun In, Namkyeong Lee, Kibum Kim, Chanyoung Park

First submitted to arXiv on: 27 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates vulnerabilities in Graph Neural Networks (GNNs) and proposes a new poisoning attack to degrade their performance. GNNs are susceptible to adversarial attacks: small, barely perceptible changes to the graph that harm prediction accuracy. However, existing meta-gradient-based attacks are biased towards the training nodes, so the resulting perturbations are uneven and concentrate on edges incident to labeled nodes. The proposed attack, Metacon, uses a contrastive surrogate objective to alleviate this bias and outperforms existing methods on benchmark datasets. By understanding the root cause of the bias, researchers can develop more effective attacks for testing the robustness of GNNs (a simplified code sketch of this meta-gradient setup follows the summaries below).
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how Graph Neural Networks (GNNs) can be tricked into making mistakes by small changes to the graph. The researchers found that current attack methods are lopsided: they mostly change edges connected to labeled nodes and largely ignore unlabeled ones. The new method proposed in this paper spreads its changes more evenly and is more effective at exposing these weaknesses. By understanding how GNNs work and what makes them vulnerable, scientists can build better tests of how robust these networks really are.
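
To make the bias concrete: meta-gradient attacks such as Metattack score each candidate edge flip by the gradient of a surrogate training loss, differentiated through the surrogate model's own training steps. The sketch below is a simplified illustration, not the authors' code: the two-layer linearized GCN surrogate, the choice of positives and negatives in contrastive_loss, and all hyperparameters are assumptions made for clarity.

```python
# Minimal sketch (assumed, not the paper's implementation) of meta-gradient
# graph poisoning with two different surrogate objectives.
import torch
import torch.nn.functional as F

def gcn_forward(adj, feats, w1, w2):
    """Two-layer linearized GCN surrogate: Z = A_hat (A_hat X W1) W2."""
    deg = adj.sum(1).clamp(min=1.0)
    a_hat = adj / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
    return a_hat @ (a_hat @ feats @ w1) @ w2

def labeled_ce_loss(z, labels, train_mask, adj):
    # Standard surrogate: cross-entropy on labeled nodes only. Its
    # meta-gradient concentrates on edges incident to those labeled nodes.
    return F.cross_entropy(z[train_mask], labels[train_mask])

def contrastive_loss(z, labels, train_mask, adj, tau=0.5):
    # Contrastive surrogate over all nodes, labeled and unlabeled: connected
    # nodes are treated as positives and every other node as a negative (an
    # illustrative choice, not necessarily the paper's exact objective).
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / tau)
    pos = (sim * adj).sum(1) + 1e-8
    return -torch.log(pos / sim.sum(1)).mean()

def meta_gradient(adj, feats, labels, train_mask, attack_loss, steps=10, lr=0.1):
    """Differentiate the attacker's loss through the surrogate's training steps."""
    adj = adj.clone().float().requires_grad_(True)
    w1 = (0.1 * torch.randn(feats.size(1), 16)).requires_grad_(True)
    w2 = (0.1 * torch.randn(16, int(labels.max()) + 1)).requires_grad_(True)
    for _ in range(steps):                        # inner training of the surrogate
        z = gcn_forward(adj, feats, w1, w2)
        inner = F.cross_entropy(z[train_mask], labels[train_mask])
        g1, g2 = torch.autograd.grad(inner, (w1, w2), create_graph=True)
        w1, w2 = w1 - lr * g1, w2 - lr * g2
    z = gcn_forward(adj, feats, w1, w2)
    loss = attack_loss(z, labels, train_mask, adj)
    return torch.autograd.grad(loss, adj)[0]      # scores for candidate edge flips
```

In a Metattack-style outer loop, the attacker would greedily flip the edge with the largest score returned by meta_gradient, recompute, and repeat until the edge-flip budget is exhausted. The debiasing idea described in the summary then amounts to passing an objective like contrastive_loss, which depends on all nodes, instead of labeled_ce_loss as attack_loss.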

Keywords

  • Artificial intelligence