


PAC Learnability under Explanation-Preserving Graph Perturbations

by Xu Zheng, Farhad Shirani, Tianchun Wang, Shouwei Gao, Wenqian Dong, Wei Cheng, Dongsheng Luo

First submitted to arXiv on: 7 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper studies the sample complexity of learning graph neural network (GNN) classifiers on graph-structured data, which captures complex relationships between entities. The authors build on the concept of graph explanations: subgraphs that form an almost sufficient statistic of the input graph with respect to its classification label. Two approaches are analyzed: explanation-assisted learning rules and explanation-assisted data augmentation, in which new training graphs are generated by perturbations that preserve the explanation subgraph. The former can significantly reduce sample complexity, while the latter improves performance when the augmented data is in-distribution but may lead to worse sample complexity than explanation-agnostic learning when it is out-of-distribution.
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper talks about a new way to make artificial intelligence (AI) work better with complex data that has many connections between things, like social networks or molecules in biology. It uses a kind of model called a graph neural network that can understand these connections. The authors found two ways to make such models better: one uses the most important connections in the data, called explanations, to help the model learn faster, and another adds extra training data by slightly changing the less important connections while keeping the important ones. They tested these methods and showed that they work well when used correctly.
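The augmentation idea described above, perturbing a graph while leaving its explanation subgraph untouched, can be sketched in plain Python. The function name, parameters, and the simple edge-dropping scheme below are illustrative assumptions for this summary, not the paper's actual algorithm:

```python
import random

def explanation_preserving_perturbations(edges, explanation_edges,
                                         num_samples=5, drop_prob=0.3, seed=0):
    """Generate perturbed copies of a graph (given as an edge list) that keep
    the explanation subgraph intact while randomly dropping each
    non-explanation edge with probability `drop_prob`.

    Illustrative sketch only: the paper's augmentation is defined abstractly
    over explanation-preserving perturbations; random edge dropping is just
    one simple instance.
    """
    rng = random.Random(seed)
    explanation = set(explanation_edges)
    rest = [e for e in edges if e not in explanation]
    samples = []
    for _ in range(num_samples):
        # Every perturbed graph retains all explanation edges ...
        kept = sorted(explanation)
        # ... and a random subset of the remaining edges.
        kept += [e for e in rest if rng.random() >= drop_prob]
        samples.append(kept)
    return samples

# Hypothetical example: a 4-cycle with a chord, where edges (0,1) and (1,2)
# are treated as the explanation for the graph's label.
graphs = explanation_preserving_perturbations(
    edges=[(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)],
    explanation_edges=[(0, 1), (1, 2)],
    num_samples=4,
)
```

Each generated graph is a valid training sample for the same label as the original, since the (almost) label-sufficient explanation subgraph is preserved; whether the perturbed graphs stay in-distribution is exactly the condition the paper's analysis hinges on.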

Keywords

* Artificial intelligence  * Classification  * Data augmentation