
Summary of CNN Explainability with Multivector Tucker Saliency Maps for Self-Supervised Models, by Aymene Mohammed Bouayed, Samuel Deslauriers-Gauthier, Adrian Iaccovelli, and David Naccache


CNN Explainability with Multivector Tucker Saliency Maps for Self-Supervised Models

by Aymene Mohammed Bouayed, Samuel Deslauriers-Gauthier, Adrian Iaccovelli, David Naccache

First submitted to arXiv on: 30 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

The proposed Tucker Saliency Map (TSM) method applies Tucker tensor decomposition to generate saliency maps for Convolutional Neural Networks (CNNs), and is particularly suitable for self-supervised models. Building upon EigenCAM, TSM leverages the inherent structure of feature maps to produce more accurate singular vectors and values. The resulting saliency maps effectively highlight objects of interest in the input. The method is further extended into multivector variants, Multivec-EigenCAM and Multivector Tucker Saliency Maps (MTSM), which utilize all singular vectors and values to further improve saliency map quality. Quantitative evaluations demonstrate performance competitive with label-dependent methods on supervised classification models, and explainability improved by approximately 50% over EigenCAM for both supervised and self-supervised models.
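To make the idea concrete, here is a minimal numpy sketch of the EigenCAM-style projection that TSM builds on, plus a "multivector" variant that uses all singular vectors instead of only the leading one. This is an illustration under stated assumptions, not the paper's implementation: the actual TSM replaces the matrix SVD below with a Tucker decomposition of the full (C, H, W) feature tensor, and the function names and random feature map are hypothetical.

```python
import numpy as np

def eigencam_map(feats):
    """EigenCAM-style saliency: project onto the leading right singular vector.

    feats: (C, H, W) feature maps from a convolutional layer.
    Returns a (H, W) saliency map normalized to [0, 1].
    """
    C, H, W = feats.shape
    M = feats.reshape(C, H * W)          # channels-by-pixels matrix
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    sal = np.abs(Vt[0]).reshape(H, W)    # leading right singular vector
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def multivec_map(feats):
    """Multivector variant: combine ALL singular vectors, each weighted
    by its singular value, rather than keeping only the first."""
    C, H, W = feats.shape
    M = feats.reshape(C, H * W)
    _, S, Vt = np.linalg.svd(M, full_matrices=False)
    sal = np.abs(S[:, None] * Vt).sum(axis=0).reshape(H, W)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Demo on a random feature tensor (stand-in for a real CNN activation)
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 5, 4))
print(eigencam_map(feats).shape, multivec_map(feats).shape)
```

Note that neither function needs class labels: saliency comes purely from the structure of the feature maps, which is why this family of methods also applies to self-supervised models.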
Low Difficulty Summary (original content by GrooveSquid.com)

Tucker Saliency Map (TSM) is a new way to understand how Convolutional Neural Networks (CNNs) make decisions. Most methods need labels, but TSM works without them. It uses a special kind of math called Tucker tensor decomposition to create saliency maps that show what's important in the input. This helps us see which objects or parts the network is focusing on. The method works for both labeled and unlabeled data, and it can even explain the decisions of self-supervised models.

Keywords

» Artificial intelligence  » Classification  » Self supervised  » Supervised