


One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability

by Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh

First submitted to arXiv on: 2 Oct 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Despite the growing use of deep neural networks in safety-critical decision-making, their black-box nature hinders transparency and interpretability. Explainable AI (XAI) methods aim to understand a model’s internal workings, with attribution methods like saliency maps identifying significant regions within an input. However, conventional attribution methods overlook the structure of the input data, often failing to interpret what these regions represent. To address this limitation, we propose leveraging the wavelet domain as a robust mathematical foundation for attribution. Our Wavelet Attribution Method (WAM) extends gradient-based feature attributions into the wavelet domain, providing a unified framework for explaining classifiers across images, audio, and 3D shapes. Empirical evaluations demonstrate that WAM matches or surpasses state-of-the-art methods in image, audio, and 3D explainability.
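The core idea of the summary above — computing gradient-based attributions with respect to wavelet coefficients rather than raw pixels — can be sketched in a few lines. The sketch below is illustrative only, not the paper's WAM implementation: it assumes a one-level orthonormal Haar transform on a 1D signal and a toy linear "classifier", both chosen for simplicity. Because the Haar transform is orthonormal, the gradient of the score with respect to the wavelet coefficients is simply the Haar transform of the gradient with respect to the signal.

```python
import math

def haar_1d(x):
    # One level of the orthonormal Haar transform: pairwise averages
    # (low-pass / approximation) and differences (high-pass / detail).
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inverse_haar_1d(approx, detail):
    # Exact inverse of haar_1d (the transform is orthonormal).
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) * s, (a - d) * s])
    return x

def wavelet_saliency(weights, signal):
    # Toy linear "classifier": score = <weights, signal>, so the
    # gradient of the score w.r.t. the signal is just `weights`.
    # For an orthonormal transform, the gradient w.r.t. the wavelet
    # coefficients is the forward transform of that signal-space
    # gradient — an attribution map in the wavelet domain.
    g_approx, g_detail = haar_1d(weights)
    return g_approx, g_detail

# Example: a classifier that only looks at the first sample attributes
# importance to the coarse and fine coefficients covering that sample.
ga, gd = wavelet_saliency([1.0, 0.0, 0.0, 0.0], [1.0, 2.0, 3.0, 4.0])
```

Large detail-coefficient attributions indicate that fine-scale structure (edges, transients) drove the prediction, which is what lets a wavelet-domain explanation say *what kind* of feature mattered, not just *where* it was.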
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making artificial intelligence (AI) more understandable. Right now, AI models are like black boxes: we can’t see how they make decisions. To fix this, researchers have developed ways to “explain” what a model is doing. One approach looks at which parts of the input data matter most. However, current methods don’t take the underlying structure of the data into account, which makes it hard to understand what those important parts actually mean. The authors propose a new method that uses wavelet analysis to identify patterns in the data and explain why they’re important.

Keywords

* Artificial intelligence