Summary of Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size, by Junlin Liu and Xinchen Lyu
Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size
by Junlin Liu, Xinchen Lyu
First submitted to arXiv on: 24 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Transferable adversarial examples, human-imperceptible perturbations that confuse machine learning models, pose a crucial security threat to visual applications. The proposed black-box adversarial generation framework jointly designs enhanced input diversity and adaptive step sizes: local mixup randomly mixes a set of transformed images to diversify the input, while projecting the perturbation into the tanh space lets the step size be adjusted dynamically according to each image region’s weight in the classification. This combination yields superior transferability compared with state-of-the-art baselines (a rough sketch of the two ideas appears after this table). |
Low | GrooveSquid.com (original content) | This paper creates a special kind of fake picture that can trick computer programs. These “adversarial examples” are hard to spot because the changes are very small and only affect certain parts of the picture. The researchers came up with a new way to make these fake pictures that works better than earlier methods. They tested it on a big dataset of images and showed that it is more effective at fooling computer programs. |
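For readers who want a more concrete picture of the two ingredients in the medium-difficulty summary, the snippet below is a minimal sketch, assuming a PyTorch image classifier and 224x224 inputs scaled to [0, 1]. The function names (`local_mixup`, `tanh_attack`) and all hyperparameters are illustrative placeholders, not the authors’ implementation; the sketch only shows the general shape of mixing transformed copies into local patches and optimizing a tanh-reparameterized perturbation so that step sizes adapt per region.

```python
# Minimal sketch of local mixup + tanh-space perturbation (illustrative only).
# Assumes a PyTorch classifier `model`, 224x224 inputs `x` in [0, 1], labels `y`.
import torch
import torch.nn.functional as F

def local_mixup(x, num_mixes=4, patch=56, lam=0.5):
    """Blend randomly transformed copies of x into random local patches,
    diversifying the input without changing it globally."""
    mixed = x
    h, w = x.shape[-2], x.shape[-1]
    for _ in range(num_mixes):
        # Cheap transform: random down-scale then resize back (blurs details).
        scale = torch.randint(int(0.8 * w), w + 1, (1,)).item()
        t = F.interpolate(x, size=scale, mode="bilinear", align_corners=False)
        t = F.interpolate(t, size=(h, w), mode="bilinear", align_corners=False)
        # Blend the transform into one random patch only ("local" mixup).
        i = torch.randint(0, h - patch + 1, (1,)).item()
        j = torch.randint(0, w - patch + 1, (1,)).item()
        mask = torch.zeros_like(x)
        mask[..., i:i + patch, j:j + patch] = 1.0
        mixed = mixed * (1 - (1 - lam) * mask) + (1 - lam) * mask * t
    return mixed

def tanh_attack(model, x, y, eps=8 / 255, steps=10, lr=0.1):
    """Craft adversarial examples as x + eps * tanh(w). Optimizing the
    unconstrained variable w keeps the perturbation inside the eps-ball,
    and tanh saturation shrinks the effective per-pixel step near the bound."""
    w = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv = torch.clamp(x + eps * torch.tanh(w), 0.0, 1.0)
        loss = F.cross_entropy(model(local_mixup(adv)), y)
        (grad,) = torch.autograd.grad(loss, w)
        with torch.no_grad():
            # Normalized gradient ascent: regions with larger gradient (more
            # weight in the classification) receive larger effective steps.
            w += lr * grad / (grad.abs().mean() + 1e-12)
    return torch.clamp(x + eps * torch.tanh(w), 0.0, 1.0).detach()
```

Because tanh saturates near ±1, any pixel whose perturbation approaches the eps bound automatically takes smaller effective steps, which is one plausible reading of the paper’s “adaptive step size” idea; the exact mixing and weighting rules are described in the original paper.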
Keywords
» Artificial intelligence » Classification » Machine learning » Tanh » Transferability