Robustness Bounds on the Successful Adversarial Examples: Theory and Practice

by Hiroaki Maeshima, Akira Otsuka

First submitted to arXiv on: 4 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper establishes a new theoretical upper bound on the probability that an adversarial example (AE) succeeds against Gaussian process (GP) classification. The bound depends on the AE's perturbation norm, the GP kernel function, and the distance between the closest pair of training points with different labels; notably, it is independent of the distribution of the sample dataset. Experiments on ImageNet confirm the theoretical findings, and the study further shows that changing the kernel function's parameters changes the upper bound on successful AEs.
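To make the bound's ingredients concrete, here is a minimal Python sketch that computes two of them on toy data: the distance between the closest pair of differently labeled training points, and a kernel evaluated at the perturbation scale. The RBF kernel, length scales, perturbation norm, and toy dataset are all illustrative assumptions; the bound formula itself is stated in the paper and is not reproduced here.

```python
import numpy as np

def rbf_kernel(x, y, length_scale=1.0):
    # RBF (squared-exponential) kernel, a common choice in GP classification.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * length_scale ** 2))

def min_cross_label_distance(X, y):
    # Distance between the closest pair of training points with different labels,
    # one of the quantities the paper's bound depends on.
    best = np.inf
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j]:
                best = min(best, np.linalg.norm(X[i] - X[j]))
    return best

# Hypothetical toy data: two well-separated 2-D classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

d_min = min_cross_label_distance(X, y)
eps = 0.5  # assumed perturbation norm of an adversarial example

# The bound is a function of (eps, kernel, d_min). As an illustration of the
# paper's observation that kernel parameters affect the bound, compare kernel
# similarity at the perturbation scale and at the class-separation scale
# for two length scales.
for ls in (0.5, 2.0):
    k_eps = rbf_kernel(np.zeros(2), np.array([eps, 0.0]), length_scale=ls)
    k_dmin = rbf_kernel(np.zeros(2), np.array([d_min, 0.0]), length_scale=ls)
    print(f"length_scale={ls}: d_min={d_min:.3f}, k(eps)={k_eps:.3f}, k(d_min)={k_dmin:.3f}")
```

Running the sketch shows how shrinking the length scale makes the kernel discriminate more sharply between perturbation-sized and class-separation-sized distances, which is the kind of parameter effect the paper's experiments examine.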
Low Difficulty Summary (original content by GrooveSquid.com)
The paper studies how to limit the effectiveness of attacks on machine learning models. The researchers find a way to estimate how likely such attacks are to succeed, using a technique called a Gaussian process, and show that their approach works on real-world images. This could help make AI systems more secure.

Keywords

  • Artificial intelligence
  • Classification
  • Machine learning
  • Probability