

CAM-Based Methods Can See through Walls

by Magamed Taimeskhanov, Ronan Sicre, Damien Garreau

First submitted to arXiv on: 2 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
CAM-based methods are widely used post-hoc interpretability techniques that generate saliency maps to explain the predictions of image classification models. We show, both theoretically and experimentally, that most of these methods incorrectly attribute importance to parts of the image the model cannot see. We analyze GradCAM's behavior on a simple masked CNN at initialization, and we observe the same phenomenon in practice with a VGG-like model constrained not to use the lower part of an image, which nevertheless receives positive saliency scores there. We quantify this issue on two new datasets, highlighting the risk of misinterpreting a model's behavior.

Low Difficulty Summary (original content by GrooveSquid.com)
Imagine you're trying to understand how a computer program works. One way is to look at which parts of an image matter most for its prediction. Some methods do this by creating a map that highlights those parts. But we found that many of these methods have a problem: they often say that some parts of the image are important when the model can't even see those parts! We showed this can happen both in theory and in real experiments with specially constrained models. This is bad because it could lead us to misunderstand how these models work.

Keywords

  • Artificial intelligence
  • CNN
  • Image classification