Backdoor Graph Condensation
by Jiahao Wu, Ning Lu, Zeyu Dai, Wenqi Fan, Shengcai Liu, Qing Li, Ke Tang
First submitted to arxiv on: 3 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: This paper studies graph condensation, a technique that improves training efficiency for graph neural networks (GNNs) by compressing a large graph into a much smaller one, such that GNNs trained on the condensed graph achieve performance comparable to those trained on the original. Existing studies focus on the trade-off between the size of the condensed graph and the GNN's performance (model utility), but have not considered the security implications of graph condensation. To address this gap, the authors introduce backdoor graph condensation, the first backdoor attack against the graph condensation process. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper is about making it easier to train computers on big networks by shrinking them down to smaller ones. It's like taking a huge map and reducing it to a simpler one that still shows the important information. The goal is to find the right balance between how small the network gets and how well the computer can learn from it. But until now, nobody had thought about what might happen if someone tried to trick the computer by adding fake information to the smaller network. This paper aims to fix that by studying something called backdoor graph condensation. |
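To make the condensation idea in the summaries above concrete, here is a minimal toy sketch. It is not the paper's method (real approaches such as gradient-matching condensation optimize the synthetic graph during training): this hypothetical `condense_by_class_mean` simply represents each node class by the mean of its feature vectors, shrinking a graph with many nodes down to one synthetic node per class.

```python
import numpy as np

def condense_by_class_mean(features, labels):
    """Toy graph condensation: one synthetic node per class.

    features: (num_nodes, num_feats) node feature matrix
    labels:   (num_nodes,) integer class labels
    Returns a much smaller (num_classes, num_feats) feature matrix
    and its matching label vector.
    """
    classes = np.unique(labels)
    synth_feats = np.stack(
        [features[labels == c].mean(axis=0) for c in classes]
    )
    return synth_feats, classes

# Example: 1000 nodes with 16 features, 4 classes -> 4 synthetic nodes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 4, size=1000)
X_small, y_small = condense_by_class_mean(X, y)
print(X_small.shape)  # (4, 16)
```

A model trained on `X_small` would see a 250x smaller dataset; the research question in actual condensation methods is how to build such a synthetic graph without losing model utility, and, in this paper, how an attacker could poison that process.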
Keywords
* Artificial intelligence
* GNN