Summary of Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training, by Enes Altinisik et al.
Exploiting the Layered Intrinsic Dimensionality of Deep Models for Practical Adversarial Training
by Enes Altinisik, Safa Messaoud, Husrev Taha Sencar, Hassan Sajjad, Sanjay Chawla
First submitted to arXiv on: 27 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes SMAAT, a new Adversarial Training (AT) algorithm designed to make AT more scalable and effective. Classical AT suffers from a drop in generalization and from the high computational cost of generating adversarial examples (AEs). SMAAT builds on the manifold conjecture, which holds that off-manifold AEs lead to better robustness while on-manifold AEs lead to better generalization. By applying perturbations at the intermediate network layer with the lowest intrinsic dimension, SMAAT generates more off-manifold AEs; this shortens the PGD chains needed to produce them and improves scalability over classical AT. The paper also explains why generalization and robustness trends differ between vision and language models. SMAAT is shown to be effective across applications such as sentiment classification, safety filtering, and retrieval, delivering improved robustness with comparable generalization.
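To make the mechanism described in the medium summary concrete, here is a minimal PyTorch-style sketch of intermediate-layer adversarial training in the spirit of SMAAT: a short PGD chain perturbs the hidden representation at a chosen layer (the one assumed to have the lowest intrinsic dimension) instead of the input. Everything below, including the `SimpleEncoder` model, the lower/upper split, and the hyperparameters, is an illustrative assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: adversarial training with a short PGD chain applied
# to an intermediate hidden representation (SMAAT-style), not the authors' code.

class SimpleEncoder(nn.Module):
    """Toy model split into a 'lower' and 'upper' part around the perturbed layer."""
    def __init__(self, in_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.lower = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.upper = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.upper(self.lower(x))


def smaat_style_step(model, x, y, optimizer, eps=0.1, alpha=0.05, pgd_steps=1):
    """One training step: PGD on the intermediate representation, then an update."""
    model.eval()
    with torch.no_grad():
        h_clean = model.lower(x)            # representation at the chosen layer

    # Short PGD chain on the hidden representation (often just 1 step here).
    delta = torch.zeros_like(h_clean, requires_grad=True)
    for _ in range(pgd_steps):
        loss = F.cross_entropy(model.upper(h_clean + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    # Train on the adversarially perturbed hidden representation.
    model.train()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model.upper(model.lower(x) + delta.detach()), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SimpleEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
    print(smaat_style_step(model, x, y, opt))
```

In the paper's setting, the perturbed layer is selected by estimating intrinsic dimensionality across layers; the fixed `lower`/`upper` split above merely stands in for that selection, and the single PGD step over the hidden representation corresponds to the reduced PGD chain length the summary mentions.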
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about making AI systems more reliable and better at handling tricky situations. The current way of making AI more robust, called Adversarial Training, has some big limitations: it can make the AI worse at its main job, and it takes a lot of computing power. The new method, called SMAAT, tries to fix these problems by changing where and how it creates the "adversarial examples" used to train the AI. The new approach is faster and more effective, and it works well across different types of AI systems.
Keywords
» Artificial intelligence » Classification » Generalization