Impact of Adversarial Attacks on Deep Learning Model Explainability

by Gazi Nazia Nur, Mohammad Ahnaf Sadat

First submitted to arXiv on: 15 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates how the explainability of deep learning models is affected when the models are subjected to subtle image perturbations known as adversarial attacks, which can significantly mislead a model while remaining imperceptible to humans. The authors apply the FGSM and BIM attack methods and measure their impact on both model accuracy and model explanations. Accuracy drops substantially, from 89.94% to 58.73% under FGSM and to 45.50% under BIM, yet explanation metrics such as IoU and RMSE change only negligibly, suggesting these metrics may not be sensitive enough to detect adversarial perturbations.
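The medium summary names two gradient-based attacks (FGSM and BIM) and two explanation-comparison metrics (IoU and RMSE) without showing how they work. Below is a minimal PyTorch sketch of both, offered only as an illustration under stated assumptions: model, image, label, and the saliency maps are hypothetical placeholders, the paper's framework is not stated here, and the binarization threshold used for IoU is an assumption rather than a detail from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Fast Gradient Sign Method: one gradient step of size epsilon in the
    # direction that increases the loss, clamped back to a valid pixel range
    # so the perturbation stays visually subtle.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

def bim_attack(model, image, label, epsilon=0.03, alpha=0.005, steps=10):
    # Basic Iterative Method: repeated small FGSM steps, each projected
    # back into an epsilon-ball around the original image.
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        adv = adv.detach() + alpha * adv.grad.sign()
        adv = original + (adv - original).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)
    return adv

def explanation_shift(sal_clean, sal_adv, threshold=0.5):
    # Compare saliency maps before and after an attack using the two
    # metrics the summary cites: RMSE on the raw maps, and IoU on maps
    # binarized at a (hypothetical) threshold.
    rmse = torch.sqrt(F.mse_loss(sal_adv, sal_clean)).item()
    a, b = sal_clean > threshold, sal_adv > threshold
    iou = (a & b).sum().item() / max((a | b).sum().item(), 1)
    return iou, rmse

BIM is essentially FGSM applied repeatedly with a projection step, which is consistent with the summary's numbers: the iterative attack pushes accuracy down further (to 45.50%) than the single-step FGSM does (58.73%).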
Low Difficulty Summary (original content by GrooveSquid.com)
Deep learning models are super smart at recognizing things in pictures, but they can be hard to understand because they don’t explain how they make their decisions. This paper looks at what happens when we trick these models with tiny image changes that humans can’t notice, called adversarial attacks. The results show that even though the models get much worse at guessing what’s in a picture, their explanations stay pretty much the same.

Keywords

» Artificial intelligence  » Deep learning