


JMA: a General Algorithm to Craft Nearly Optimal Targeted Adversarial Example

by Benedetta Tondi, Wei Guo, Mauro Barni

First submitted to arXiv on: 2 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on the paper’s arXiv page.
Medium Difficulty Summary (original content by GrooveSquid.com)
This research proposes a novel approach for crafting targeted adversarial examples against deep learning classifiers. Traditional methods are suboptimal: they rely on increasing the likelihood of the target class, a strategy that is not well suited to one-hot encoding settings. Instead, this paper introduces a theoretically sound attack that minimizes a Jacobian-induced Mahalanobis distance (JMA) term, accounting for the effort required to move the latent-space representation of the input sample in a given direction. The minimization is solved by exploiting the Wolfe duality theorem, which reduces the problem to a Non-Negative Least Squares (NNLS) problem. The result is an optimal solution to the linearized version of the adversarial example problem originally introduced by Szegedy et al. [1]; a sketch of this linearized step appears after the summaries below. Experimental results confirm the generality and effectiveness of the attack, which is also efficient in multi-label classification scenarios.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper creates a new way to trick deep learning systems into making mistakes. Usually, attackers just try to make the computer get something wrong. This approach instead makes the computer give a specific wrong answer, like changing its mind about one or more labels. It works by minimizing a special distance between what the computer currently outputs and what the attacker wants it to output. The math behind this is complicated, but it works! It even works in situations with many different labels at once, which is important because that’s how we categorize things like animals or movies.
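
To make the linearized step concrete, here is a minimal Python sketch of the idea the medium summary describes: given the Jacobian J of the network output at the input x, the minimum-norm input perturbation that moves the linearized output onto a target t has squared norm equal to a Jacobian-induced Mahalanobis distance in output space. The function name and the toy linear "network" below are illustrative assumptions, not the authors' code; the paper's full algorithm additionally handles constraints via the Wolfe duality theorem and an NNLS solver, which this sketch omits.

```python
# Minimal sketch of the linearized step (illustrative, not the authors' code).
import numpy as np

def linearized_min_norm_step(J, f_x, t):
    """Minimum-norm perturbation delta with f(x) + J @ delta = t
    for the linearized model.

    Its squared norm, eps^T (J J^T)^{-1} eps with eps = t - f(x),
    is the Jacobian-induced Mahalanobis distance that a JMA-style
    attack minimizes. Constraint handling (Wolfe duality -> NNLS)
    from the paper is omitted here.
    """
    eps = t - f_x                       # required displacement in output space
    y = np.linalg.solve(J @ J.T, eps)   # solve (J J^T) y = eps
    return J.T @ y                      # delta = J^T (J J^T)^{-1} eps

# Toy usage with a linear "network" f(x) = W @ x (hypothetical example).
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 20))        # the Jacobian of a linear model is W
x = rng.standard_normal(20)
t = np.eye(5)[2]                        # one-hot target output
delta = linearized_min_norm_step(W, W @ x, t)
print(np.allclose(W @ (x + delta), t))  # True: linearized target reached
```

For a real network, J would be the Jacobian of the logits (or of the latent representation) at x, obtained by automatic differentiation, and the step would be applied iteratively, since the model is only locally linear.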

Keywords

* Artificial intelligence  * Classification  * Deep learning  * Latent space  * Likelihood  * One hot