Krait: A Backdoor Attack Against Graph Prompt Tuning

by Ying Song, Rita Singh, Balaji Palanisamy

First submitted to arxiv on: 18 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the vulnerability of graph prompt tuning to backdoor attacks in few-shot contexts. Graph prompt tuning has been shown to be effective at transferring general graph knowledge from pre-trained models to various downstream tasks, but its susceptibility to backdoor attacks raises a critical concern. The authors introduce Krait, a novel graph prompt backdoor that efficiently embeds triggers into merely 0.15% to 2% of training nodes, achieving high attack success rates without sacrificing clean accuracy. They also propose three customizable trigger generation methods and a centroid similarity-based loss function that optimizes prompt tuning for both attack effectiveness and stealthiness. Experiments on four real-world graphs show that Krait can achieve 100% attack success rates by poisoning as few as 2 to 22 nodes.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how some graph prompts can be tricked into making a model do the wrong thing. This is a problem because it could be used to fool artificial intelligence into making bad decisions. The authors created a new way to perform these tricks, called Krait. They tested Krait on four different graphs and found that it works even when only a small number of nodes are changed. They also came up with ways to make the tricks more effective and harder to detect.
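The attack recipe described in the medium summary — stamping a trigger onto a small fraction of training nodes, relabeling them to a target class, and pulling their embeddings toward that class's centroid — can be illustrated with a rough sketch. This is not the authors' implementation: the helper names (`poison_nodes`, `centroid_similarity_loss`), the additive feature trigger, and the cosine-based objective are all simplifying assumptions made here for illustration.

```python
import numpy as np

def poison_nodes(features, labels, poison_rate, target_class, trigger, seed=None):
    """Illustrative backdoor poisoning sketch (not the authors' code):
    stamp a fixed `trigger` feature pattern onto a small fraction of nodes
    and relabel them to `target_class`, mirroring a 0.15%-2% budget."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    n_poison = max(1, int(round(poison_rate * n)))
    idx = rng.choice(n, size=n_poison, replace=False)
    feats = features.copy()
    labs = labels.copy()
    feats[idx] = feats[idx] + trigger   # embed the trigger pattern additively
    labs[idx] = target_class            # flip to the attacker's target label
    return feats, labs, idx

def centroid_similarity_loss(embeddings, labels, poisoned_idx, target_class):
    """Sketch of a centroid-similarity objective: higher cosine similarity
    between poisoned-node embeddings and the target-class centroid means
    lower loss, encouraging stealthy alignment with the target class."""
    centroid = embeddings[labels == target_class].mean(axis=0)
    z = embeddings[poisoned_idx]
    cos = (z @ centroid) / (
        np.linalg.norm(z, axis=1) * np.linalg.norm(centroid) + 1e-12
    )
    return 1.0 - cos.mean()
```

In a real pipeline the loss above would be added to the clean prompt-tuning objective so that the prompts both preserve clean accuracy and respond to the trigger; here numpy arrays stand in for graph-model embeddings to keep the sketch self-contained.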

Keywords

» Artificial intelligence  » Few shot  » Loss function  » Prompt