


Towards Effective and General Graph Unlearning via Mutual Evolution

by Xunkai Li, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, Guoren Wang

First submitted to arxiv on: 22 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Neural and Evolutionary Computing (cs.NE); Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This research proposes a new approach to machine unlearning in graph-based scenarios, addressing the growing need for data privacy and model robustness. The Mutual Evolution Graph Unlearning (MEGU) paradigm simultaneously evolves the predictive and unlearning capacities of the model within a unified training framework. MEGU outperforms state-of-the-art baselines by 2.7%, 2.5%, and 3.2% on average across feature-, node-, and edge-level unlearning tasks. It also trains efficiently, reducing time and space overhead by averages of 159.8x and 9.6x, respectively, compared to retraining the GNN from scratch.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new method for machine unlearning on graphs called Mutual Evolution Graph Unlearning (MEGU). It combines predictive and unlearning capacities in a single training framework. The results show that MEGU outperforms other methods by 2-3% on average, and it takes far less time and space to train than retraining the model from scratch.

Keywords

  • Artificial intelligence
  • GNN