Summary of Bayesian-guided Label Mapping for Visual Reprogramming, by Chengyi Cai et al.
Bayesian-guided Label Mapping for Visual Reprogramming
by Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
First submitted to arXiv on: 31 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper introduces Bayesian-guided Label Mapping (BLM), a novel method for visual reprogramming (VR) that adapts the output interface of pretrained vision models to solve downstream tasks. The authors reveal that traditional label mapping methods, which rely on one-to-one correspondences between labels, may overlook complex relationships between pretrained and downstream labels. BLM constructs an iteratively updated probabilistic label mapping matrix using Bayesian conditional probability, considering the joint distribution of downstream labels and predicted model outputs. Experiments demonstrate the superior performance of BLM over existing methods on both vision models (ResNeXt) and vision-language models (CLIP). The paper's findings offer a probabilistic lens for understanding VR effectiveness.
Low | GrooveSquid.com (original content) | This paper helps us understand how to better use trained computer vision models for new tasks. Right now, we can teach these models what to do by changing the way they get their information or what they produce. But this method might not work well if the labels (the answers) are very different from what the model was originally taught. To solve this problem, the authors created a new way of mapping old labels to new labels that takes into account how the model actually behaves. They tested this new method on several types of models and found it performed better than existing methods.
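The probabilistic mapping described in the medium summary can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the variable names, the Laplace smoothing, and the use of argmax predictions to estimate the joint distribution are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pretrained, n_downstream, n_samples = 10, 3, 500

# Stand-ins for a frozen pretrained model's argmax outputs and the
# downstream ground-truth labels (random here, for illustration only).
pred = rng.integers(0, n_pretrained, size=n_samples)
true = rng.integers(0, n_downstream, size=n_samples)

# Empirical joint counts: counts[i, j] = #(pretrained output i, downstream label j)
counts = np.zeros((n_pretrained, n_downstream))
np.add.at(counts, (pred, true), 1)

# Conditional probability P(downstream label j | pretrained output i),
# with Laplace smoothing (an assumption) so unseen pairs keep nonzero mass.
alpha = 1.0
blm = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

# A probabilistic mapping replaces the hard one-to-one assignment:
# downstream probabilities are a weighted mix of pretrained outputs.
logits = rng.normal(size=(4, n_pretrained))
p_pre = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
p_down = p_pre @ blm  # shape (4, n_downstream); each row sums to 1
```

In the paper this matrix is updated iteratively as the reprogrammed inputs change the model's output distribution; the sketch above shows only a single estimation step.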
Keywords
- Artificial intelligence
- Probability