
Summary of Historical Test-time Prompt Tuning for Vision Foundation Models, by Jingyi Zhang et al.


Historical Test-time Prompt Tuning for Vision Foundation Models

by Jingyi Zhang, Jiaxing Huang, Xiaoqin Zhang, Ling Shao, Shijian Lu

First submitted to arXiv on: 27 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on the arXiv abstract page.

Medium Difficulty Summary (original GrooveSquid.com content)
In this paper, researchers aim to improve the performance of test-time prompt tuning, a technique that learns effective prompts during inference without requiring task-specific annotations. While previous approaches have shown potential, their performance often degrades over time as prompts are updated with new data. To address this issue, the authors propose HisTPT, a Historical Test-time Prompt Tuning technique that memorizes useful knowledge from previously learned test samples and enables robust prompt tuning. HisTPT introduces three types of knowledge banks and an adaptive retrieval mechanism to regularize predictions. The approach is tested on various visual recognition tasks and domains, consistently achieving superior performance.

Low Difficulty Summary (original GrooveSquid.com content)
Test-time prompt tuning has shown great potential in learning effective prompts without requiring task-specific annotations. However, its performance often degrades over time when prompts are continuously updated with new data. To address this issue, researchers propose HisTPT, a technique that memorizes useful knowledge from previously learned test samples and enables robust prompt tuning. This approach is tested on various visual recognition tasks and domains, consistently achieving superior performance.

Keywords

» Artificial intelligence  » Inference  » Prompt