
DeCoOp: Robust Prompt Tuning with Out-of-Distribution Detection

by Zhi Zhou, Ming Yang, Jiang-Xin Shi, Lan-Zhe Guo, Yu-Feng Li

First submitted to arXiv on: 1 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available via the arXiv listing.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents Open-world Prompt Tuning (OPT), a problem setting aimed at preserving vision-language models’ zero-shot capabilities during prompt tuning. Current prompt tuning methods evaluate performance on base and new classes separately, which is impractical for real-world applications where the two are mixed. The authors propose the Decomposed Prompt Tuning framework (DePT), which incorporates out-of-distribution detection into prompt tuning to improve base-to-new discriminability. Building on DePT, they present Decomposed Context Optimization (DeCoOp), a novel approach that adds new-class detectors and sub-classifiers to further enhance discriminability. Experiments on 11 benchmark datasets demonstrate the effectiveness of DePT and show that DeCoOp outperforms current state-of-the-art methods, achieving an average accuracy improvement of 2%.
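The detect-then-classify routing described above can be illustrated with a toy sketch. This is not the authors' code: the function name, the single fixed threshold, and the use of plain cosine similarity over pre-computed embeddings are all simplifying assumptions made here for illustration. The idea it shows is the one the summary describes: a new-class detector first checks whether an image looks like any base (seen) class, and only if it does is the tuned base-class sub-classifier used; otherwise the model falls back to zero-shot prompts covering all classes.

```python
import numpy as np

def cosine(vec, mat):
    """Cosine similarity between one vector and each row of a matrix."""
    vec = vec / np.linalg.norm(vec)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    return mat @ vec

def decoop_style_predict(image_emb, base_prompts, zero_shot_prompts,
                         threshold=0.5):
    """Toy detect-then-classify routing (illustrative sketch only).

    base_prompts: embeddings of tuned prompts for base (seen) classes.
    zero_shot_prompts: hand-crafted prompt embeddings covering all classes.
    Returns ("base", idx) or ("new", idx) depending on which branch fired.
    """
    base_scores = cosine(image_emb, base_prompts)
    if base_scores.max() < threshold:
        # New-class detector fired: the image resembles no base class,
        # so fall back to zero-shot classification over all classes.
        zs_scores = cosine(image_emb, zero_shot_prompts)
        return "new", int(np.argmax(zs_scores))
    # Otherwise, use the tuned base-class sub-classifier.
    return "base", int(np.argmax(base_scores))
```

In this sketch the threshold plays the role of the out-of-distribution detector: samples scoring below it on every base-class prompt are treated as new classes rather than being forced into a base class, which is the base-to-new discriminability the paper targets.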
Low Difficulty Summary (original content by GrooveSquid.com)
This paper improves how we train computer models that can understand images and text. Current models are good at doing this for certain tasks, but they’re not very good at learning new things. The authors want to change this by creating a new way to fine-tune the models using prompts (short pieces of text). They call this “Open-world Prompt Tuning.” To do this, they’ve developed two new approaches: Decomposed Prompt Tuning and Decomposed Context Optimization. These methods help the model learn more about what’s in an image and what it means.

Keywords

» Artificial intelligence  » Optimization  » Prompt  » Zero shot