Summary of Progressive Semantic-Guided Vision Transformer for Zero-Shot Learning, by Shiming Chen et al.
Progressive Semantic-Guided Vision Transformer for Zero-Shot Learning
by Shiming Chen, Wenjin Hou, Salman Khan, Fahad Shahbaz Khan
First submitted to arXiv on: 11 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A novel zero-shot learning (ZSL) approach, dubbed ZSLViT, is proposed to overcome a key limitation of existing ZSL methods: they rely on visual features extracted by pre-trained networks and therefore fail to learn matched visual-semantic correspondences. ZSLViT introduces semantic-embedded token learning and semantic-guided token attention to explicitly discover semantic-related visual representations, and then fuses tokens with low semantic-visual correspondence to discard semantic-unrelated information and enhance the visual features. Applied progressively across the network, this process enables accurate visual-semantic interaction for ZSL. Empirically, ZSLViT achieves significant performance gains on three benchmark datasets: CUB, SUN, and AWA2. A conceptual sketch of these steps appears after this table. |
Low | GrooveSquid.com (original content) | Researchers have been working on a way to teach computers to recognize things they’ve never seen before. This is called zero-shot learning (ZSL). The problem with current methods is that they don’t really understand what the pictures are showing them. To fix this, scientists came up with a new approach called ZSLViT. It uses special techniques to figure out which parts of the picture are important and which aren’t. This helps it learn more accurately about things it’s never seen before. They tested their method on three different sets of pictures and found that it did much better than other methods. |
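The medium-difficulty summary describes three mechanisms: semantic-embedded token learning, semantic-guided token attention, and fusion of tokens with low semantic-visual correspondence. The PyTorch snippet below is a minimal conceptual sketch of how one such block could be wired, not the authors' released implementation: the class name SemanticGuidedBlock, the 768/312 dimensions, the cosine-similarity scoring, and the keep_ratio hyper-parameter are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticGuidedBlock(nn.Module):
    """One transformer block that (1) scores each visual token against a
    semantic (attribute) embedding, (2) biases self-attention with those
    scores, and (3) fuses the lowest-scoring tokens into a single token.
    This is a simplified illustration, not the ZSLViT reference code."""

    def __init__(self, dim=768, sem_dim=312, num_heads=8, keep_ratio=0.7):
        super().__init__()
        self.keep_ratio = keep_ratio
        # embed the class-attribute vector into the visual token space
        self.sem_proj = nn.Linear(sem_dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens, semantics):
        # tokens:    (B, N, dim)  patch tokens from the previous layer
        # semantics: (B, sem_dim) class-attribute vector (e.g. CUB attributes)
        sem = self.sem_proj(semantics).unsqueeze(1)                       # (B, 1, dim)

        # 1) semantic-embedded token learning: score each token's relevance
        #    to the semantic embedding (cosine similarity as a stand-in)
        score = F.cosine_similarity(tokens, sem.expand_as(tokens), dim=-1)  # (B, N)

        # 2) semantic-guided attention: re-weight tokens by their semantic
        #    score before self-attention
        x = tokens * score.softmax(dim=-1).unsqueeze(-1)
        x = self.norm1(tokens + self.attn(x, x, x)[0])
        x = self.norm2(x + self.mlp(x))

        # 3) visual enhancement: keep the top-k semantic-related tokens and
        #    merge the remaining (semantic-unrelated) tokens into one token
        k = max(1, int(self.keep_ratio * x.size(1)))
        idx = score.topk(k, dim=-1).indices                               # (B, k)
        keep = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        drop_mask = torch.ones_like(score).scatter(1, idx, 0.0)           # 1 = to be fused
        fused = (x * drop_mask.unsqueeze(-1)).sum(1, keepdim=True)
        fused = fused / drop_mask.sum(1, keepdim=True).clamp(min=1.0).unsqueeze(-1)
        return torch.cat([keep, fused], dim=1)                            # (B, k + 1, dim)


# Example: 196 patch tokens of width 768, a 312-dimensional attribute vector
block = SemanticGuidedBlock()
out = block(torch.randn(2, 196, 768), torch.randn(2, 312))
print(out.shape)  # torch.Size([2, 138, 768])
```

In the paper's progressive design, blocks of this kind would be stacked so the token set is gradually pruned toward semantic-related tokens; the exact scoring, attention, and fusion rules in ZSLViT differ from this simplification.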
Keywords
» Artificial intelligence » Attention » Token » Zero shot