Summary of Edge Private Graph Neural Networks with Singular Value Perturbation, by Tingting Tang et al.
Edge Private Graph Neural Networks with Singular Value Perturbation
by Tingting Tang, Yue Niu, Salman Avestimehr, Murali Annavaram
First submitted to arXiv on: 16 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Social and Information Networks (cs.SI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes Eclipse, a new privacy-preserving GNN training algorithm that maintains good model utility while providing strong privacy protection on edges, addressing the vulnerability of GNN training pipelines to node feature leakage and edge extraction attacks. The approach rests on two key observations: adjacency matrices exhibit low-rank behavior, so training can use a low-rank format obtained via singular value decomposition (SVD); and adding noise to the low-rank singular values preserves graph privacy while maintaining model utility. Eclipse provides a formal differential privacy guarantee on edges and achieves a better privacy-utility tradeoff than existing methods. Experiments show significant gains in model utility under strong privacy constraints and improved resilience against edge attacks. |
Low | GrooveSquid.com (original content) | This paper proposes a new way to keep private information hidden when using graph neural networks (GNNs). GNNs are used in many applications, but they can be vulnerable to attacks that try to steal sensitive data. The authors want to ensure that trained GNN models don't reveal too much about the original graph structure. Their algorithm, Eclipse, maintains good performance while keeping private information safe: it stores the graph in a compact low-rank format, which reduces the amount of noise that must be added and preserves the main graph structure. |
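The core idea described in the medium summary, truncating the adjacency matrix's SVD to a low rank and adding noise to the retained singular values, can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: the function name, the `rank` and `noise_std` parameters, and the Gaussian noise choice are assumptions for demonstration.

```python
import numpy as np

def perturb_adjacency(adj, rank=2, noise_std=0.05, seed=0):
    """Sketch of the low-rank SVD perturbation idea: keep the top `rank`
    singular values and add Gaussian noise to them before reconstructing.
    (Hypothetical helper; not the paper's API or its calibrated DP noise.)"""
    rng = np.random.default_rng(seed)
    # Full SVD of the adjacency matrix
    U, s, Vt = np.linalg.svd(adj, full_matrices=False)
    # Perturb only the retained (top-`rank`) singular values
    s_top = s[:rank] + rng.normal(0.0, noise_std, size=rank)
    # Reconstruct a low-rank, perturbed adjacency matrix
    return U[:, :rank] @ np.diag(s_top) @ Vt[:rank, :]

# Toy symmetric adjacency matrix for a 5-node graph
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

A_priv = perturb_adjacency(A, rank=2, noise_std=0.05)
print(A_priv.shape)  # (5, 5)
```

Because noise is injected into only a handful of singular values rather than every entry of the adjacency matrix, far less total noise is needed for a given privacy level, which is the intuition behind the improved privacy-utility tradeoff the paper reports.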
Keywords
* Artificial intelligence * GNN * Machine learning