Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

by Haoyi Xiong, Xuhong Li, Xiaofei Zhang, Jiamin Chen, Xinhao Sun, Yuchen Li, Zeyi Sun, Mengnan Du

First submitted to arXiv on: 9 Jan 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)

This research paper examines explainable AI (XAI) through the role that data collection, processing, and analysis play in making deep neural networks more interpretable. The authors categorize existing work into three groups: interpretations of deep models, influences of training data, and insights from domain knowledge. They distill XAI methodologies into data mining operations on various data modalities (images, text, and tabular data), as well as on training logs, checkpoints, and model behavior descriptors. The result is a comprehensive, data-centric examination of XAI through the lens of data mining methods and applications.

Low Difficulty Summary (written by GrooveSquid.com; original content)

This paper is about making artificial intelligence (AI) more understandable. Right now, AI systems are very good at tasks like recognizing pictures or understanding speech, but we don’t really know how they do it. This study looks at the role that data plays in helping us understand why AI makes certain decisions. The authors group previous research into three categories: interpreting what AI is doing, looking at how the training data affects AI’s behavior, and using domain knowledge to gain new insights from AI. They also show how data mining techniques can be used to make AI more transparent.

Keywords

* Artificial intelligence