Summary of On the Adversarial Transferability of Generalized “Skip Connections”, by Yisen Wang et al.
On the Adversarial Transferability of Generalized “Skip Connections”
by Yisen Wang, Yichuan Mo, Dongxian Wu, Mingjie Li, Xingjun Ma, Zhouchen Lin
First submitted to arXiv on: 11 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here. |
Medium | GrooveSquid.com (original content) | The paper investigates the effect of skip connections on the transferability of adversarial examples in deep learning models. Specifically, it finds that using more of the gradient from skip connections than from residual modules during backpropagation allows the generation of highly transferable adversarial examples. The method is dubbed the Skip Gradient Method (SGM) and is shown to improve transferability across architectures and domains, including ResNets, Transformers, Inceptions, Neural Architecture Search, and Large Language Models. The paper also demonstrates that SGM can improve the stealthiness of attacks against current defenses, and it provides theoretical explanations and empirical insights into how SGM works (a minimal code sketch follows the table). |
Low | GrooveSquid.com (original content) | The paper looks at how skip connections in deep learning models make it easier to create “attacks” that trick a model into thinking something is real when it’s not. The authors found a way to use these skip connections to make such attacks even stronger and sneakier. This could be a problem for people who want to keep their models safe from being fooled. |
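To make the Skip Gradient Method idea concrete, here is a minimal PyTorch sketch. It is an illustrative assumption, not the authors’ released code: the toy block structure, the class names, and the `gamma = 0.5` decay value are all hypothetical. The core idea it demonstrates is that during backpropagation, gradients flowing through the residual branch are scaled down by a factor `gamma` in (0, 1], while gradients through the skip connection pass unchanged, so the skip-connection gradients dominate.

```python
import torch
import torch.nn as nn

class ResidualBranchDecay(torch.autograd.Function):
    """Identity in the forward pass; scales the incoming gradient by
    gamma in the backward pass so that gradients arriving through the
    skip connection dominate those from the residual branch."""
    gamma = 0.5  # decay factor in (0, 1]; an illustrative choice

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return ResidualBranchDecay.gamma * grad_output

class ToyResidualBlock(nn.Module):
    """A toy residual block, out = x + f(x), with SGM-style gradient
    decay applied only to the residual branch f(x)."""
    def __init__(self, dim):
        super().__init__()
        self.residual = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        # The skip path (x) passes gradients unchanged; the residual
        # branch's gradient is decayed by gamma during backpropagation.
        return x + ResidualBranchDecay.apply(self.residual(x))

# Example: one FGSM-style step using the skip-dominated gradients.
model = nn.Sequential(ToyResidualBlock(8), nn.Linear(8, 2))
x = torch.randn(1, 8, requires_grad=True)
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([1]))
loss.backward()
x_adv = (x + 0.03 * x.grad.sign()).detach()  # adversarial example
```

Any gradient-based attack run on a surrogate model built this way crafts perturbations from skip-heavy gradients, which is the property the paper links to improved transferability across target architectures.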
Keywords
» Artificial intelligence » Backpropagation » Deep learning » Transferability