Summary of Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization, by Yuhang Zang et al.

Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization

by Yuhang Zang, Hanlin Goh, Josh Susskind, Chen Huang

First submitted to arXiv on: 29 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper tackles open-domain visual concept recognition by finetuning vision-language models. Existing vision-language models generalize strongly across a variety of tasks, yet their closed-set design makes novel concepts hard to recognize. Recent prompt learning methods have shown promise in improving both in-distribution (ID) and out-of-distribution (OOD) accuracy, but the authors demonstrate that such finetuned models tend to overfit known classes, degrading performance on unknown classes. To address this, the paper proposes OGEN, which introduces a class-conditional feature generator that synthesizes OOD features using only the class name of an unknown class. The synthesized features help regularize the decision boundary between ID and OOD data during joint optimization. The authors also introduce an adaptive self-distillation mechanism that transfers knowledge between model states during training to further curb overfitting. Experiments show that OGEN yields significant gains in OOD generalization performance across different settings.
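To make the two mechanisms more concrete, here is a minimal sketch of how a class-conditional feature generator and an exponential-moving-average (EMA) style self-distillation teacher could look in PyTorch. This is an illustrative approximation, not the authors' implementation: the encoder interface (CLIP-style, L2-normalized embeddings), module names such as ClassConditionalFeatureGenerator, and the fixed loss weight and EMA momentum are all assumptions.

```python
# Illustrative sketch (not the paper's code): a class-conditional
# feature generator that regularizes the ID/OOD decision boundary.
# Assumes CLIP-style encoders producing L2-normalized 512-d embeddings;
# all names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassConditionalFeatureGenerator(nn.Module):
    """Maps a class-name text embedding (plus noise) to a synthetic
    image-feature vector for that unseen class."""
    def __init__(self, dim=512, noise_dim=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(dim + noise_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, dim),
        )

    def forward(self, text_emb):
        # Sample noise so each class name can yield diverse features.
        z = torch.randn(text_emb.size(0), self.noise_dim,
                        device=text_emb.device)
        feat = self.net(torch.cat([text_emb, z], dim=-1))
        return F.normalize(feat, dim=-1)

def joint_loss(image_feats, labels, known_text_embs, unknown_text_embs,
               generator, temperature=0.01, ood_weight=0.5):
    """Cross-entropy on real (known-class) features, plus a regularizer
    that pushes synthesized unknown-class features toward their own
    class names rather than toward the known classes."""
    all_text = torch.cat([known_text_embs, unknown_text_embs], dim=0)
    # Standard ID classification over the joint label space.
    logits_id = image_feats @ all_text.t() / temperature
    loss_id = F.cross_entropy(logits_id, labels)
    # Synthesize OOD features from unknown class names only.
    fake = generator(unknown_text_embs)
    fake_labels = (torch.arange(len(unknown_text_embs), device=fake.device)
                   + len(known_text_embs))
    logits_ood = fake @ all_text.t() / temperature
    loss_ood = F.cross_entropy(logits_ood, fake_labels)
    return loss_id + ood_weight * loss_ood

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Self-distillation keeps an EMA 'teacher' of earlier model states;
    its predictions can supervise the current student to curb
    overfitting (the distillation loss itself is omitted here)."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```

In the paper, the generator is optimized jointly with the prompt learner and the self-distillation is adaptive over training; the fixed ood_weight and plain EMA update above stand in for those details.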
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how computers can recognize new things they’ve never seen before. Right now, these systems are very good at recognizing the things we train them on, but struggle with things we haven’t shown them. To fix this, the authors came up with a new idea called OGEN that uses class names to generate information about new classes. This helps the computer make better decisions when it sees something unfamiliar. The authors also developed a way for the model to learn from its own earlier versions so that it does not become too specialized in the things it has already seen. As a result, the model can recognize new things more accurately.