Summary of Semantic Token Reweighting for Interpretable and Controllable Text Embeddings in CLIP, by Eunji Kim et al.
Semantic Token Reweighting for Interpretable and Controllable Text Embeddings in CLIP
by Eunji Kim, Kyuhong Shim, Simyung Chang, Sungroh Yoon
First submitted to arXiv on: 11 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed framework, Semantic Token Reweighting to build Interpretable text embeddings (SToRI), refines the text encoding process in Vision-Language Models (VLMs) such as CLIP by differentially weighting semantic elements according to their contextual importance. This gives finer control over emphasis in response to data-driven insights and user preferences, and enables interpretive analysis of vision tasks through natural language. SToRI's effectiveness is demonstrated through comprehensive experiments on few-shot image classification and on image retrieval tailored to user preferences (see the illustrative sketch after this table). |
Low | GrooveSquid.com (original content) | SToRI is a new way to make text embeddings more useful by making them reflect how important each word in a sentence is. This helps when analyzing images with natural language, such as describing what is happening in a picture. SToRI works by giving more or less weight to certain words depending on their context, so it can help focus on the most important parts of an image. |
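To make the idea of "differentially weighting semantic elements" concrete, here is a minimal, hypothetical sketch. It is not the authors' implementation: it assumes the reweighting can be realized as per-token scalar weights multiplied into token embeddings before a CLIP-style text encoder, and it uses a small toy transformer rather than CLIP's actual text tower. All class and variable names (`ReweightedTextEncoder`, `token_weights`, the toy dimensions) are illustrative assumptions.

```python
# Hypothetical sketch of semantic token reweighting for text embeddings.
# Assumption: per-token scalar weights scale token embeddings before encoding;
# the real SToRI mechanism inside CLIP may differ.

import torch
import torch.nn as nn

class ReweightedTextEncoder(nn.Module):
    def __init__(self, vocab_size=49408, dim=512, max_len=77):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)           # toy stand-in for CLIP's token embedding
        self.pos_emb = nn.Parameter(torch.zeros(max_len, dim))   # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)  # shallow toy encoder
        self.ln_final = nn.LayerNorm(dim)

    def forward(self, token_ids, token_weights):
        # token_ids: (B, L) integer ids; token_weights: (B, L) emphasis per token
        x = self.token_emb(token_ids) + self.pos_emb[: token_ids.size(1)]
        x = x * token_weights.unsqueeze(-1)   # reweight semantic elements before encoding
        x = self.transformer(x)
        x = self.ln_final(x)
        return x.mean(dim=1)                  # pooled text embedding (toy pooling)

# Usage: up-weight the token for "red" in "a red car" to emphasize color,
# e.g. for preference-tailored image retrieval. Token ids here are fake.
encoder = ReweightedTextEncoder()
ids = torch.randint(0, 49408, (1, 4))              # pretend tokenization of "a red car"
weights = torch.tensor([[1.0, 2.0, 1.0, 1.0]])     # extra emphasis on the "red" token
emb = encoder(ids, weights)
print(emb.shape)  # torch.Size([1, 512])
```

Setting all weights to 1.0 recovers an ordinary (unweighted) text embedding, which is why this kind of reweighting lends itself to interpretation: the effect of emphasizing or de-emphasizing a word can be read off directly from how the embedding, and any downstream classification or retrieval score, changes.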
Keywords
» Artificial intelligence » Few shot » Image classification » Token