

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning

by Changdae Oh, Hyeji Hwang, Hee-young Lee, YongTaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song

First submitted to arXiv on: 26 Mar 2023

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
In this paper, the authors tackle the problem of fine-tuning pre-trained models (PTMs) for various downstream tasks, a crucial challenge in the era of large-scale PTMs. Recent parameter-efficient transfer learning (PETL) methods have shown impressive performance, but they rely on optimistic assumptions that often do not hold in real-world applications. The authors propose BlackVIP, a novel approach that adapts PTMs without any knowledge of their architectures or parameters. The method has two components: the Coordinator and simultaneous perturbation stochastic approximation with gradient correction (SPSA-GC). The Coordinator designs input-dependent, image-shaped visual prompts that improve few-shot adaptation and robustness under distribution and location shifts. SPSA-GC efficiently estimates the gradient of the target model to update the Coordinator. Extensive experiments demonstrate that BlackVIP enables robust adaptation to diverse domains without access to the PTM's parameters and with minimal memory requirements.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about making pre-trained models easier to reuse for new tasks. Fine-tuning these models today requires a lot of data and computing power. The authors introduce BlackVIP, which adapts a model without knowing how it works internally or having access to its parameters. This matters because, in practice, we often have no control over the pre-trained model, so this approach gives us much more flexibility.
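
To make the black-box idea concrete, here is a minimal sketch of SPSA-style zeroth-order optimization with a Nesterov-style gradient correction, which is the general family that SPSA-GC belongs to. This is not the authors' implementation: the quadratic toy objective, the hyperparameter values, and the function names are all illustrative assumptions. The key point is that the target model is only ever queried for loss values, never for gradients.

```python
import numpy as np

def spsa_gradient(f, theta, c=0.01, rng=None):
    """Zeroth-order SPSA gradient estimate of f at theta.

    Uses a single Rademacher perturbation and two loss queries,
    so f is treated as a black box (no backpropagation needed).
    """
    rng = rng if rng is not None else np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
    # Two-sided finite difference along the random direction.
    g_hat = (f(theta + c * delta) - f(theta - c * delta)) / (2.0 * c) * delta
    return g_hat

def spsa_gc_step(f, theta, m, lr=0.02, beta=0.9, c=0.01, rng=None):
    """One parameter update with a Nesterov-style gradient correction:
    the gradient is estimated at the look-ahead point theta + beta * m,
    which is the "correction" that stabilizes noisy SPSA estimates.
    """
    g = spsa_gradient(f, theta + beta * m, c=c, rng=rng)
    m_new = beta * m - lr * g      # momentum buffer update
    return theta + m_new, m_new    # parameter update

# Illustrative usage: minimize a simple quadratic as a stand-in for
# the (black-box) downstream loss that BlackVIP's Coordinator would see.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum(x ** 2))
    theta, m = np.ones(4), np.zeros(4)
    for _ in range(300):
        theta, m = spsa_gc_step(f, theta, m, rng=rng)
    print(f(theta))  # loss should be much smaller than the initial value
```

In BlackVIP itself, `theta` would be the Coordinator's parameters and `f` the downstream loss computed through the frozen black-box PTM; only the two-query structure above is essential.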

Keywords

  • Artificial intelligence
  • Few shot
  • Fine tuning
  • Parameter efficient
  • Transfer learning