


Do Pre-trained Vision-Language Models Encode Object States?

by Kaleb Newman, Shijie Wang, Yuan Zang, David Heffren, Chen Sun

First submitted to arXiv on: 16 Sep 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates whether vision-language models (VLMs) learn to encode object states from web-scale data. The authors curate a dataset called ChangeIt-Frames for object state recognition and evaluate nine open-source VLMs, including models trained with contrastive and generative objectives. While these models excel at recognizing objects, they struggle to accurately identify the physical states of those objects. The experiments point to three areas where VLMs can improve: the quality of object localization, the binding of concepts to objects, and the learning of visual and language encoders that discriminate between object states. The authors release their dataset and code for future research. (A rough sketch of this kind of evaluation appears after the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine trying to understand a video or movie without knowing what is happening at each moment. This paper explores whether special computer models called vision-language models (VLMs) can learn to recognize the different states of objects, such as an apple changing from whole to sliced. The researchers tested nine of these VLMs on a new dataset they created and found that while the models are great at recognizing objects, they often get confused about which state an object is in. To improve this, the team identified three areas where the models can do better: locating objects accurately, linking words to the right objects, and building visual and language representations that tell object states apart.

Keywords

» Artificial intelligence