Summary of Bag of Tricks to Boost Adversarial Transferability, by Zeliang Zhang et al.


Bag of Tricks to Boost Adversarial Transferability

by Zeliang Zhang, Wei Yao, Xiaosen Wang

First submitted to arXiv on: 16 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep neural networks are vulnerable to adversarial examples, but traditional attacks often lack transferability across different models. To address this issue, researchers have proposed various approaches, including gradient-based, input transformation-based, and model-related attacks. This paper investigates how small changes to existing attacks affect performance and proposes a “bag of tricks” to enhance adversarial transferability, including momentum initialization, scheduled step size, dual example, spectral-based input transformation, and ensemble strategies. Experiments on the ImageNet dataset demonstrate the effectiveness of these methods and show that combining them further improves transferability. A rough code sketch of two of these tricks appears after the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about adversarial attacks on deep neural networks. Attackers can create slightly altered images that fool one network into making mistakes, but those same images often fail to fool other networks. The researchers looked at how small changes in existing attack methods affect their performance and found some simple tricks that make the attacks transfer better, so they fool many different networks at once. They tested these tricks on a big dataset of images and showed that they really do help.
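
The summaries above mention momentum initialization and a scheduled step size among the tricks, but this page does not give the paper's exact formulations. The sketch below is only a rough illustration under assumptions: it warms up a momentum buffer with a few extra gradient passes (one plausible reading of “momentum initialization”) and linearly decays the per-iteration step size (one plausible reading of “scheduled step size”) inside a standard MI-FGSM-style loop. Function and parameter names such as momentum_attack and warmup_steps are illustrative, not taken from the paper.

```python
# Hedged sketch of an iterative transfer attack with two of the listed tricks:
# a warm-started momentum buffer and a decaying step size. The exact schedules
# and initialization used in the paper may differ from the assumptions below.
import torch
import torch.nn.functional as F


def momentum_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0, warmup_steps=5):
    """Craft adversarial examples under an L_inf budget eps using momentum accumulation."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum

    # Momentum initialization (assumed form): a few extra gradient passes on the
    # clean input that only warm up the momentum buffer before the attack starts.
    for _ in range(warmup_steps):
        x_tmp = x_adv.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_tmp), y)
        grad = torch.autograd.grad(loss, x_tmp)[0]
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)

    for t in range(steps):
        # Scheduled step size (assumed schedule): decay linearly from 2*eps/steps to eps/steps.
        alpha = (2.0 - t / max(steps - 1, 1)) * eps / steps

        x_tmp = x_adv.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_tmp), y)
        grad = torch.autograd.grad(loss, x_tmp)[0]

        # Standard MI-FGSM momentum update with L1-normalized gradients.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()

        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0).detach()

    return x_adv
```

In use, model would be a surrogate classifier, and the resulting adversarial images would be evaluated against separate target models to measure transferability.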

Keywords

  • Artificial intelligence
  • Transferability