Summary of Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability, by Junqi Gao et al.
Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability
by Junqi Gao, Biqing Qi, Yao Li, Zhichang Guo, Dong Li, Yuming Xing, Dazhi Zhang
First submitted to arXiv on: 8 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper explores the transferability of adversarial perturbations in neural networks, focusing on targeted attacks. The authors demonstrate that networks trained on the same dataset exhibit more consistent performance in the high sample density regions (HSDR) of each class than in low sample density regions. This insight is used to improve targeted attack transferability by adding perturbations directed towards the HSDR of the target class. The proposed Easy Sample Matching Attack (ESMA) strategy outperforms state-of-the-art generative methods while requiring significantly less storage and computation; ESMA attacks all classes with a single model, whereas current generative methods require a separate model for each target class. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper helps us understand how to trick artificial intelligence (AI) systems in a more efficient way. Researchers found that AI models are better at recognizing things when they have lots of information about those things. This means we can make the AI models more confused by adding tiny changes, making it harder for them to recognize what’s real and what’s not. The new attack method, called Easy Sample Matching Attack (ESMA), is better than existing methods because it uses less computer power and storage space. |
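The paper's ESMA generator itself is not reproduced in these summaries. As a rough illustration of what a targeted adversarial perturbation means in practice, here is a minimal sketch of a targeted FGSM-style step on a toy linear softmax classifier (the toy model, function names, and step size are illustrative assumptions, not the authors' method, which instead steers perturbations towards high sample density regions of the target class):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_step(x, W, target, eps=0.05):
    """One targeted FGSM-style step (illustrative, not ESMA).

    For a linear softmax classifier with logits z = W @ x, the gradient of
    the cross-entropy loss for class `target` w.r.t. x is
    W.T @ (p - onehot(target)); stepping against its sign nudges x so the
    classifier assigns more probability to `target`.
    """
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = W.T @ (p - onehot)
    return x - eps * np.sign(grad_x)

# Toy demo: 3 classes, 4 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
target = 2

x_adv = x.copy()
for _ in range(20):  # iterate the small step a few times
    x_adv = targeted_step(x_adv, W, target)
```

After the loop, `softmax(W @ x_adv)[target]` exceeds `softmax(W @ x)[target]`: the perturbed input is pushed towards the target class, which is the basic operation that transfer-based targeted attacks such as ESMA build on.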
Keywords
» Artificial intelligence » Transferability