Summary of ADAPT to Robustify Prompt Tuning Vision Transformers, by Masih Eskandar et al.
ADAPT to Robustify Prompt Tuning Vision Transformers
by Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy
First submitted to arXiv on: 19 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper examines parameter-efficient prompt tuning of Vision Transformers for downstream tasks through the lens of adversarial robustness. The authors show that previously proposed defenses in this setting suffer from gradient obfuscation and are vulnerable to adaptive attacks, and they propose ADAPT, a novel framework for adaptive adversarial training in the prompt tuning paradigm. ADAPT achieves competitive robust accuracy of ~40% while tuning only ~1% of the parameters required for full-model fine-tuning. A rough code sketch of this setup appears below the table. |
Low | GrooveSquid.com (original content) | This paper is about making deep learning models harder to fool. Big models can be tricked into making mistakes by specially crafted “adversarial” inputs. Researchers usually defend against this by also training the models on such inputs, but retraining every parameter of a large model takes a lot of memory and time. A cheaper option is to keep the model frozen and only learn a small “prompt” that adapts it to a new task. This paper shows that earlier defenses built this way only look safe because they hide their gradients, and it introduces a new training method called ADAPT that keeps the model robust while updating only a tiny fraction of its parameters. |
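To make the medium-difficulty description concrete, here is a minimal, self-contained sketch of adversarial training under prompt tuning: a small ViT-style backbone is kept frozen, learnable prompt tokens are prepended to the patch embeddings, and a PGD attack is computed through the prompted model so that it is adaptive to the current prompt. Everything below (the toy backbone, PGD as the attack, the `PromptedViT` name, the trainable linear head, and all hyperparameters) is illustrative and is not the authors' ADAPT implementation.

```python
# A minimal sketch, not the authors' ADAPT code: the toy ViT-style backbone,
# the use of PGD as the attack, and all sizes/hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedViT(nn.Module):
    """Frozen ViT-style encoder; only the prompt tokens (and a small head) are trained."""
    def __init__(self, patch=4, dim=128, depth=4, heads=4, num_classes=10, num_prompts=10):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)
        # The "pretrained" backbone stays frozen; only prompts and head get gradients.
        for module in (self.embed, self.encoder):
            for p in module.parameters():
                p.requires_grad_(False)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)                  # B x N x dim
        tokens = torch.cat([self.prompts.expand(len(x), -1, -1), tokens], dim=1)
        return self.head(self.encoder(tokens).mean(dim=1))

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD computed through the prompted model, so the attack 'sees' the prompt."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

model = PromptedViT()
optim = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)

def train_step(x, y):
    x_adv = pgd(model, x, y)                  # adversarial examples w.r.t. the current prompt
    loss = F.cross_entropy(model(x_adv), y)   # update only the prompt tokens (and head)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Random tensors stand in for a real downstream dataset.
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(x, y))
```

Running the attack through the prompted model is the point the summary highlights: if the attack ignores the prompt, a defense can appear robust only because its gradients are obfuscated, and it fails once the attack is made adaptive.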
Keywords
* Artificial intelligence
* Deep learning
* Fine tuning
* Parameter efficient
* Prompt