Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Models

by Simon Schrodi, David T. Hoffmann, Max Argus, Volker Fischer, Thomas Brox

First submitted to arXiv on: 11 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Contrastive vision-language models (VLMs), such as CLIP, have become popular for their versatility across downstream tasks. Despite their success at zero-shot object recognition, they surprisingly struggle with attribute recognition. Previous research attributed this to the modality gap and to a bias towards objects over attributes. This analysis paper investigates both phenomena thoroughly. Evaluating off-the-shelf VLMs shows that while the gap’s influence on performance is often overshadowed by other factors, closing it does lead to improvements. The authors also find that only a few embedding dimensions drive the gap, and that the image and text embedding spaces are organized differently. To study object bias cleanly, they introduce a definition and a measure of it, and show that object bias does not directly hurt performance on attributes. Why, then, do both phenomena emerge? Experiments that control the information shared between the modalities reveal that an information imbalance between images and captions drives both the modality gap and the object bias, and uncover a connection to the entropy of the logits. (A code sketch illustrating these quantities follows the summaries below.)
Low Difficulty Summary (original content by GrooveSquid.com)
This research paper looks at how well computer models can understand pictures and words. These models are good at recognizing objects in pictures, but not so good at identifying other things, like colors or shapes. The scientists behind this study think there is a gap between how pictures and words are represented inside these models, which makes certain things harder to understand. They also found that the models tend to focus on objects more than anything else. To figure out why, they ran experiments that controlled how much information was shared between the pictures and the words. They found that the words usually carry less information than the pictures they describe, and this imbalance is what causes both problems.
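
To make the key quantities in the summaries concrete, here is a minimal sketch of how the modality gap and the entropy of the logits can be computed. It uses random stand-in vectors rather than a real CLIP model, and the centroid-distance measure of the gap and the centroid-shift trick for closing it are common conventions from the literature, not necessarily the authors' exact protocol.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere, as contrastive VLMs like CLIP do.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def modality_gap(image_emb, text_emb):
    # A common measure of the modality gap: the Euclidean distance between
    # the centroids of the image and text embedding clouds.
    return np.linalg.norm(image_emb.mean(axis=0) - text_emb.mean(axis=0))

def logit_entropy(image_emb, text_emb, temperature=0.01):
    # Entropy of the softmax over image-text similarity logits, averaged
    # over images; the paper connects this quantity to the information
    # imbalance between the two modalities.
    logits = image_emb @ text_emb.T / temperature
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()

# Stand-in embeddings: in practice these would come from a CLIP image
# encoder and text encoder run on paired image-caption data.
rng = np.random.default_rng(0)
image_emb = normalize(rng.normal(loc=0.5, size=(1000, 512)))
text_emb = normalize(rng.normal(loc=-0.5, size=(1000, 512)))

print(f"modality gap: {modality_gap(image_emb, text_emb):.3f}")
print(f"mean logit entropy: {logit_entropy(image_emb, text_emb):.3f}")

# One simple way to close the gap, used in prior work: shift the text
# embeddings by the difference of the centroids, then renormalize.
shift = image_emb.mean(axis=0) - text_emb.mean(axis=0)
text_shifted = normalize(text_emb + shift)
print(f"gap after shifting: {modality_gap(image_emb, text_shifted):.3f}")
```

On real CLIP embeddings, the same centroid shift shrinks the measured gap without retraining, which is the kind of intervention the paper evaluates when it reports that closing the gap leads to improvements.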

Keywords

» Artificial intelligence  » Embedding  » Logits  » Zero-shot