
Summary of Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models, by Siddharth Karamcheti et al.


Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models

by Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, Dorsa Sadigh

First submitted to arXiv on: 12 Feb 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates visually-conditioned language models (VLMs) and their design decisions. Despite the proliferation of new models like LLaVa, InstructBLIP, and PaLI-3, key factors affecting model performance remain unclear due to a lack of standardized evaluations. To address this, the authors compile a suite of evaluations spanning visual question answering, object localization, and challenge sets that probe hallucination properties. They then rigorously investigate VLMs along design axes such as pretrained visual representations and training from base vs. instruct-tuned language models. The study culminates in three resource contributions: a unified framework for evaluating VLMs, optimized training code, and model checkpoints, including a family of VLMs that outperform the state-of-the-art.
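The study described above sweeps over design axes (e.g., the pretrained visual representation and base vs. instruct-tuned language models) and scores each configuration on a suite of evaluations. The sketch below illustrates that grid-search pattern in miniature; the axis names, configurations, and scores are all hypothetical placeholders, not the paper's actual components or results.

```python
from itertools import product

# Hypothetical design axes standing in for the paper's study
# (names and values are illustrative only).
VISION_BACKBONES = ["clip", "siglip", "dinov2"]
LM_VARIANTS = ["base", "instruct-tuned"]

def evaluate(config):
    """Return toy per-task scores for one (backbone, lm) configuration.

    In a real harness this would run the model on benchmarks such as
    visual question answering and object localization.
    """
    backbone, lm = config
    base = {"clip": 0.70, "siglip": 0.74, "dinov2": 0.68}[backbone]
    bonus = 0.03 if lm == "instruct-tuned" else 0.0
    return {"vqa": base + bonus, "localization": base - 0.05 + bonus}

def sweep():
    """Score every configuration and return the best by mean task score."""
    results = {}
    for config in product(VISION_BACKBONES, LM_VARIANTS):
        scores = evaluate(config)
        results[config] = sum(scores.values()) / len(scores)
    best = max(results, key=results.get)
    return best, results

best, results = sweep()
```

The key design point mirrored here is that every configuration is scored on the *same* fixed evaluation suite, which is what makes the cross-model comparisons in the paper meaningful.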
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about special computer programs called visually-conditioned language models (VLMs). These programs are good at understanding pictures and talking about what’s in them. There are many different ways to design these programs, but nobody really knows which ones work best. The authors of this paper want to figure out what makes some VLMs better than others. They’re doing this by looking at how well different models do on certain tasks, like answering questions about pictures or finding specific objects in a picture. They’re also providing tools and resources for other researchers to use and improve upon their work.

Keywords

  • Artificial intelligence
  • Hallucination
  • Question answering