Graph Unlearning with Efficient Partial Retraining

by Jiahao Zhang, Lin Wang, Shijie Wang, Wenqi Fan

First submitted to arXiv on: 12 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks (GNNs) have achieved significant success in various real-world applications, but their performance and reliability may degrade when they are trained on undesirable graph data. To address this issue, retraining-based graph unlearning methods partition the training graph into subgraphs and train a sub-GNN on each, so that unlearning can be carried out efficiently by retraining only the affected subgraph. However, the graph partition process causes information loss, resulting in low utility for the sub-GNN models. This paper proposes GraphRevoker, a novel framework that preserves model utility through graph property-aware sharding and combines the sub-GNN models for prediction via graph contrastive sub-model aggregation. The authors conduct extensive experiments to demonstrate the superiority of their proposed approach.
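To make the shard-then-partially-retrain idea concrete, here is a minimal, hypothetical sketch in plain Python. It is not the authors' GraphRevoker code: the "sub-models" are stand-in averages over node features rather than real sub-GNNs, the round-robin `partition` function is a naive placeholder for the paper's graph property-aware sharding, and `predict` uses simple averaging in place of graph contrastive sub-model aggregation. The function names are invented for illustration.

```python
# Hypothetical sketch of retraining-based graph unlearning via sharding.
# Stand-ins: averages instead of sub-GNNs; round-robin instead of
# property-aware sharding; plain averaging instead of contrastive aggregation.
from statistics import mean


def partition(nodes, num_shards):
    """Split node IDs into disjoint shards (naive round-robin placeholder)."""
    shards = [[] for _ in range(num_shards)]
    for i, node in enumerate(nodes):
        shards[i % num_shards].append(node)
    return shards


def train_submodel(shard, features):
    """Stand-in for training a sub-GNN on one subgraph: average its features."""
    return mean(features[n] for n in shard)


def unlearn(shards, submodels, features, node):
    """Remove `node`, then retrain ONLY the shard that contained it.

    This is the efficiency win: the other sub-models are untouched.
    """
    for i, shard in enumerate(shards):
        if node in shard:
            shard.remove(node)
            submodels[i] = train_submodel(shard, features)
            break
    return submodels


def predict(submodels):
    """Stand-in for sub-model aggregation: average the sub-model outputs."""
    return mean(submodels)
```

For example, with six nodes split into two shards, unlearning node 0 retrains only the first shard's sub-model; the second shard's sub-model is reused as-is. The information loss the summary mentions shows up here too: each stand-in sub-model only ever sees its own shard, never edges that would cross shard boundaries.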
Low Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks are really good at certain tasks, but sometimes they can be tricked into learning things that aren't true. To fix this, you need a way to "unlearn" what the model has picked up from bad data. One way to do this is to divide the training data into smaller pieces and train a separate model on each piece, so that only the affected piece has to be retrained. But this splitting has its own problem: it can cost the model some of its ability to make good predictions. This paper proposes a new approach called GraphRevoker that solves the problem by preserving the important information in the data and combining the results from the smaller models.

Keywords

  • Artificial intelligence
  • GNN