

Adversarial Prompt Distillation for Vision-Language Models

by Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma

First submitted to arXiv on: 22 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Adversarial Prompt Distillation (APD) method combines Adversarial Prompt Tuning (APT) with knowledge distillation to boost the adversarial robustness of Contrastive Language-Image Pre-training (CLIP) models. APD is a bimodal approach: it adds learnable prompts for both the visual and textual modalities, and it leverages a cleanly pre-trained teacher CLIP model to improve the student's performance on downstream tasks. The method surpasses current state-of-the-art APT methods in both natural and adversarial performance on multiple benchmark datasets.
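
To make the summary concrete, here is a minimal PyTorch-style sketch of what one APD-like training step could look like: adversarial images are crafted against the prompted student, the student is supervised with a mix of label cross-entropy and KL distillation from the clean teacher, and only the prompt parameters are updated. Everything here is an illustrative assumption (the model interface `model(images, text_prompts)` returning logits, the PGD helper, the loss weights), not the authors' actual implementation.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, text_prompts,
               eps=8/255, step_size=2/255, steps=3):
    """A few L-infinity PGD steps against the prompted student (illustrative)."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv, text_prompts), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + step_size * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project into eps-ball
        adv = adv.clamp(0, 1)                           # keep valid pixel range
    return adv.detach()

def apd_step(student, teacher, images, labels, text_prompts, optimizer,
             lam=0.5, tau=1.0):
    # 1. Craft adversarial examples against the student's current prompts.
    adv = pgd_attack(student, images, labels, text_prompts)

    # 2. Student sees adversarial images; the cleanly pre-trained teacher
    #    sees the original clean images.
    s_logits = student(adv, text_prompts)
    with torch.no_grad():
        t_logits = teacher(images, text_prompts)

    # 3. Cross-entropy on labels plus KL distillation toward the teacher.
    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                  F.softmax(t_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    loss = lam * ce + (1 - lam) * kd

    # 4. The optimizer is assumed to hold only the learnable visual/textual
    #    prompt parameters; the CLIP backbone stays frozen.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The key design point this sketch tries to capture is the bimodal, distillation-based setup: the prompts for both modalities are the only trainable parameters, and the clean teacher supplies soft targets that counteract the accuracy loss usually caused by adversarial training.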
Low Difficulty Summary (original content by GrooveSquid.com)
The researchers developed a new way to make vision-language models (like CLIP) more robust against maliciously altered inputs that can trick them into making mistakes. They combined two existing techniques: adding special learnable prompts to the model, and training it on attacked examples to make it stronger. The new method adds prompts for both pictures and text, and uses a clean, well-trained teacher model to help a student model get better at its job. The result is a more reliable model that does well on normal tasks and can also withstand these tricky inputs.

Keywords

» Artificial intelligence  » Distillation  » Knowledge distillation  » Prompt  » Student model  » Teacher model