
Summary of "Towards Symbolic XAI – Explanation Through Human Understandable Logical Relationships Between Features," by Thomas Schnake et al.


Towards Symbolic XAI – Explanation Through Human Understandable Logical Relationships Between Features

by Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller

First submitted to arXiv on: 30 Aug 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper proposes a new Explainable Artificial Intelligence (XAI) framework, called Symbolic XAI, which attributes relevance to symbolic queries expressing logical relationships between input features. This approach aims to capture the abstract reasoning behind a model’s predictions, going beyond traditional XAI methods that focus on highlighting single or multiple input features. The methodology is based on a simple yet general multi-order decomposition of model predictions and can be specified using higher-order propagation-based relevance methods like GNN-LRP or perturbation-based explanation methods commonly used in XAI. The framework is demonstrated to be effective in various domains, including natural language processing, computer vision, and quantum chemistry, where abstract symbolic domain knowledge is abundant and valuable. Symbolic XAI provides a flexible and human-readable understanding of the model’s decision-making process.
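
To make the idea of attributing relevance to a symbolic query more concrete, here is a minimal, self-contained Python sketch. It is not the authors' implementation: the toy model, the occlusion-style masking, and the baseline are all simplifying assumptions, and the multi-order terms are computed here as Harsanyi dividends, one simple instance of a multi-order decomposition of a prediction into contributions of feature subsets.

```python
# Illustrative sketch: attribute relevance to a logical query over features
# via a multi-order decomposition of a model prediction. This is a toy
# perturbation-based variant, not the paper's implementation; the model,
# masking scheme, and query below are assumptions made for illustration.
from itertools import combinations
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for a trained model: a fixed nonlinear function.
    return np.tanh(x[0] * x[1]) + 0.5 * x[2]

def subset_value(model, x, baseline, subset):
    # Occlusion-style value of a feature subset: keep only `subset` at its
    # true values, mask everything else to the baseline, and compare against
    # the fully masked input.
    masked = baseline.copy()
    for i in subset:
        masked[i] = x[i]
    return model(masked) - model(baseline)

def multi_order_terms(model, x, baseline):
    # Multi-order decomposition (Harsanyi dividends): recursively subtract
    # the terms of all proper subsets, so that the terms over all non-empty
    # subsets sum exactly to f(x) - f(baseline). Exponential in the number
    # of features, so only feasible for small toy inputs.
    n = len(x)
    terms = {}
    for order in range(1, n + 1):
        for S in combinations(range(n), order):
            terms[S] = subset_value(model, x, baseline, S)
            for T, v in list(terms.items()):
                if T != S and set(T) < set(S):
                    terms[S] -= v
    return terms

def query_relevance(terms, predicate):
    # Relevance of a symbolic query: aggregate the decomposition terms of
    # exactly those feature subsets that satisfy the query predicate.
    return sum(v for S, v in terms.items() if predicate(set(S)))

x = np.array([1.0, 2.0, -0.5])
baseline = np.zeros(3)
terms = multi_order_terms(toy_model, x, baseline)
# Example query "feature 0 AND feature 1": subsets containing both features.
print(query_relevance(terms, lambda S: {0, 1} <= S))
```

In this toy setting, the query "feature 0 AND feature 1" receives nearly all of the relevance, because the model's tanh term couples those two features; the paper's framework generalizes this idea to richer logical formulas and can also be instantiated with propagation-based relevance methods such as GNN-LRP instead of perturbation.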

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper introduces a new way to explain how artificial intelligence (AI) makes decisions. Current AI explanations focus on individual features or patterns, but this paper asks whether we can understand the bigger picture: the abstract reasoning behind an AI's choices. The authors propose a new approach called Symbolic XAI, which uses logical formulas to show how different pieces of information relate to each other. This helps us understand not just what an AI system is doing, but why it is making certain decisions. The method is tested in several areas, such as language processing and image recognition, where it provides valuable insights into the AI's reasoning.

Keywords

* Artificial intelligence
* GNN
* Natural language processing