Summary of "Tree of Attributes Prompt Learning for Vision-Language Models", by Tong Ding et al.
Tree of Attributes Prompt Learning for Vision-Language Models
by Tong Ding, Wanhua Li, Zhongqi Miao, Hanspeter Pfister
First submitted to arXiv on: 15 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Prompt learning has been effective in adapting vision-language models for downstream tasks. However, existing methods usually append learnable prompt tokens only to category names when obtaining textual features, failing to fully leverage the rich context those names imply. To address this, we propose Tree of Attributes Prompt learning (TAP), which first instructs LLMs to generate a tree of attributes with a "concept – attribute – description" structure for each category, and then learns the hierarchy with vision and text prompt tokens. Unlike existing methods that merely augment category names with unstructured descriptions, our approach distills structured knowledge graphs associated with class names from LLMs. We further introduce text and vision prompts designed to explicitly learn the corresponding visual attributes, effectively serving as domain experts, along with a vision-conditional pooling module that extracts instance-specific text features. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on zero-shot base-to-novel generalization, cross-dataset transfer, and few-shot classification across 11 diverse datasets. |
Low | GrooveSquid.com (original content) | This paper is about making computers better at learning new things from pictures. Right now, computers struggle at this because they rely only on category names and miss the details in a picture. The researchers propose a new way to teach computers using a special tree-like structure of attributes that helps them learn more about what's in the picture. They tested their method on 11 different datasets and found that it worked much better than existing methods. |
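To make the summaries above more concrete, here is a minimal toy sketch of the two ideas the paper describes: a "concept – attribute – description" tree for one category, and a pooling step that weights per-description text features by their similarity to an image feature. All names, contents, and the exact pooling formula are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

# Illustrative "concept - attribute - description" tree for one category,
# of the kind TAP asks an LLM to generate (contents are made up here).
attribute_tree = {
    "concept": "golden retriever",
    "attributes": {
        "coat": ["long golden fur", "a feathered tail"],
        "face": ["a broad head", "friendly dark eyes"],
    },
}

def vision_conditional_pooling(image_feat, desc_feats):
    """Pool description-level text features into a single instance-specific
    vector, weighting each description by its dot-product similarity to the
    image feature. A toy stand-in for the paper's vision-conditional pooling.

    image_feat: (dim,) array; desc_feats: (n_desc, dim) array.
    """
    scores = desc_feats @ image_feat           # similarity per description
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over descriptions
    return weights @ desc_feats                # (dim,) pooled text feature
```

Because the weights form a softmax, the pooled vector is a convex combination of the description features, so descriptions most similar to the image instance dominate the result.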
Keywords
» Artificial intelligence » Classification » Few shot » Generalization » Prompt » Zero shot