
Summary of CNN-based Explanation Ensembling for Dataset, Representation and Explanations Evaluation, by Weronika Hryniewska-Guzik et al.


CNN-based explanation ensembling for dataset, representation and explanations evaluation

by Weronika Hryniewska-Guzik, Luca Longo, Przemysław Biecek

First submitted to arXiv on: 16 Apr 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper explores the potential of combining explanations generated for deep classification models built on convolutional neural networks (CNNs), in the context of Explainable Artificial Intelligence (XAI). The authors investigate how ensembling these explanations can lead to a more coherent and reliable understanding of the model’s behavior, enabling evaluation of its representation learning. The proposed method uncovers under-representation problems in certain image classes, and it reduces the number of features by replacing images with their explanations, which also removes sensitive information. Evaluation with metrics from the Quantus library demonstrates superior localization and faithfulness compared to individual explanations.
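To make the evaluation step concrete, here is a minimal sketch of explanation ensembling followed by Quantus-based scoring. Note the hedges: the paper trains a CNN to combine explanations, whereas this sketch substitutes a simple pixel-wise mean as the ensembling step; the ResNet model, the Captum attribution methods, and the random batch are placeholder assumptions, and only the use of a Quantus metric reflects the library named in the summary.

```python
# A simplified sketch of explanation ensembling + Quantus evaluation.
# Assumptions: torchvision ResNet-18 as the classifier, Captum for
# attributions, and a random batch standing in for real data.
import numpy as np
import torch
import torchvision.models as models
import quantus
from captum.attr import Saliency, IntegratedGradients

# Placeholder classifier; the paper's models and datasets are not reproduced here.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Hypothetical batch: 8 RGB images, labels, and binary localization masks.
x_batch = torch.rand(8, 3, 224, 224)
y_batch = torch.randint(0, 1000, (8,))
s_batch = np.random.randint(0, 2, size=(8, 1, 224, 224))

# Generate one attribution map per image from each explainer,
# summing over color channels so every map has shape (1, H, W).
explainers = [Saliency(model), IntegratedGradients(model)]
a_stack = np.stack([
    e.attribute(x_batch, target=y_batch).sum(dim=1, keepdim=True).detach().numpy()
    for e in explainers
])

# Naive pixel-wise mean ensemble -- a stand-in for the paper's learned,
# CNN-based combination of explanations.
a_ensemble = a_stack.mean(axis=0)

# Localization evaluation with Quantus: does the peak of the ensembled
# explanation fall inside the ground-truth mask?
loc_scores = quantus.PointingGame()(
    model=model,
    x_batch=x_batch.numpy(),
    y_batch=y_batch.numpy(),
    a_batch=a_ensemble,
    s_batch=s_batch,
    device="cpu",
)
print("Pointing Game (mean):", float(np.mean(loc_scores)))
```

The same batch of ensembled attributions can be passed to other Quantus metric classes (e.g., its faithfulness metrics) to reproduce the kind of localization-plus-faithfulness comparison the summary describes.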
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about using artificial intelligence (AI) to help people understand how AI models work. AI is now used for important tasks like medicine and self-driving cars, but it is hard to know why the AI makes certain decisions. The researchers want to find a way to combine different explanations of an AI model’s behavior into a more accurate picture. They use a special kind of computer program called a convolutional neural network (CNN) and show that their method is better than looking at any single explanation alone. This could help us learn more about how AI models work and make them safer and more reliable.

Keywords

  • Artificial intelligence
  • Classification
  • CNN
  • Neural network
  • Representation learning