
Summary of Verifying Relational Explanations: a Probabilistic Approach, by Abisha Thapa Magar et al.


Verifying Relational Explanations: A Probabilistic Approach

by Abisha Thapa Magar, Anup Shakya, Somdeb Sarkhel, Deepak Venugopal

First submitted to arXiv on: 5 Jan 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach to verifying interpretable explanations of Graph Neural Network (GNN) predictions, specifically those produced by GNNExplainer. Existing verification relies on human subjects, which is time-consuming and requires expertise. To scale up verification, the authors develop an uncertainty quantification framework that learns a factor graph model from counterfactual examples, which are generated as symmetric approximations of the relational structure in the original data. Experiments on several datasets show that the approach reliably estimates the uncertainty of relational explanations.
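The core idea, checking an explanation against counterfactual graphs, can be illustrated with a toy sketch. This is not the authors’ method (they learn a factor graph model over the counterfactuals); instead it uses a simplified proxy, the rate at which removing an explanation edge flips a stand-in model’s prediction, and all names (`predict`, `explanation_confidence`) are hypothetical:

```python
# Hypothetical sketch (not the paper's code): score how faithful an
# explanation subgraph is by checking whether a model's prediction is
# stable under counterfactual graphs that perturb the explanation's edges.

def predict(edges):
    """Stand-in for a trained GNN: predicts 1 iff the triangle a-b-c is intact."""
    triangle = {("a", "b"), ("b", "c"), ("a", "c")}
    return 1 if triangle <= edges else 0

def counterfactuals(edges, explanation):
    """One counterfactual per explanation edge: the graph with that edge removed."""
    for e in explanation:
        yield edges - {e}

def explanation_confidence(edges, explanation):
    """Fraction of counterfactuals that flip the prediction.

    If removing an explanation edge changes the prediction, that edge
    really mattered; a high flip rate suggests the explanation is faithful.
    """
    base = predict(edges)
    flips = [predict(cf) != base for cf in counterfactuals(edges, explanation)]
    return sum(flips) / len(flips)

graph = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
good_explanation = {("a", "b"), ("b", "c")}   # edges the triangle actually needs
poor_explanation = {("c", "d")}               # edge irrelevant to the prediction

print(explanation_confidence(graph, good_explanation))  # 1.0
print(explanation_confidence(graph, poor_explanation))  # 0.0
```

The paper’s framework replaces this simple flip-rate with a learned factor graph, which lets it produce calibrated uncertainty estimates rather than a raw agreement score.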
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us make sense of predictions over complex, relational data by checking whether the reasons given for a prediction can be trusted. Usually humans verify these reasons, but that requires expertise and is hard to do at scale. The authors develop a new way to measure the uncertainty of these reasons, using counterfactual examples that reverse relationships in the original data. The approach shows promising results on different datasets, helping us decide when to trust the explanations generated for GNN predictions.
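The phrase “examples that reverse the relationships” can be made concrete with a tiny sketch. This is an assumed toy API, not the paper’s counterfactual generator: each targeted relation is simply toggled, removed if present in the graph, added if absent:

```python
# Toy illustration (assumed, not from the paper): build a counterfactual
# graph by reversing relations in the original data -- every targeted
# relation is toggled on or off via a symmetric difference.

def reverse_relations(edges, targets):
    """Flip each target relation: drop it if present, add it if absent."""
    return edges ^ targets

original = {("alice", "bob"), ("bob", "carol")}
targets = {("alice", "bob"), ("alice", "carol")}

counterfactual = reverse_relations(original, targets)
print(sorted(counterfactual))  # [('alice', 'carol'), ('bob', 'carol')]
```

Each such counterfactual asks “what would the model predict if these relationships were the other way around?”, which is the raw material the paper’s uncertainty estimates are built from.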

Keywords

» Artificial intelligence