Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding

by Ailin Deng, Zhirui Chen, Bryan Hooi

First submitted to arXiv on: 23 Feb 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Multimedia (cs.MM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover this paper, but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract, which can be read on the paper’s arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses object hallucination in Large Vision-Language Models (LVLMs), where the generated text describes objects that are not present in the image. The authors first analyze LVLM hallucination at the sentence level and find that CLIP similarity to the image is a stronger indicator of hallucination than token likelihoods. Building on this finding, they introduce CLIP-Guided Decoding (CGD), a training-free approach that uses CLIP to guide the model’s decoding process and strengthen its visual grounding in the image. CGD effectively mitigates object hallucination across multiple LVLM families while preserving the utility of the generated text. (A rough code sketch of the CLIP-guidance idea follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
LVLMs are computer programs that can understand images and generate text about them. Sometimes these models mention objects that are not actually in the image, which is a big problem if we want the models to be reliable and practical. To tackle this, the researchers measured how well different signals can tell when object hallucination happens. They found that similarity scores from a model called CLIP spot the problem better than the word probabilities the LVLM assigns on its own. So they developed a new approach called CLIP-Guided Decoding (CGD), which uses CLIP to help the model generate text that stays faithful to the image.

Keywords

  • Artificial intelligence
  • Grounding
  • Hallucination
  • Text generation
  • Token