
Summary of Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback, by Wenyi Xiao et al.


Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback

by Wenyi Xiao, Ziwei Huang, Leilei Gan, Wanggui He, Haoyuan Li, Zhelun Yu, Fangxun Shu, Hao Jiang, Linchao Zhu

First submitted to arXiv on: 22 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes novel approaches to detecting and mitigating hallucination in Large Vision Language Models (LVLMs). Hallucination occurs when generated text does not align with the given visual context, which limits the practical use of LVLMs. Most existing work either detects hallucination only at a coarse-grained level or requires expensive annotation. The authors train a sentence-level hallucination detection model on fine-grained AI feedback from proprietary models; it detects the three primary hallucination types (object, attribute, and relationship). They then propose a detect-then-rewrite pipeline that automatically constructs preference datasets for training hallucination-mitigating models (sketched after the summaries below). Finally, the paper introduces Hallucination Severity-Aware Direct Preference Optimization (HSA-DPO), which incorporates hallucination severity into preference learning (also sketched below). Extensive experiments demonstrate the effectiveness of the method.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps solve a big problem with models that describe images: they sometimes generate text that isn't true. Right now, these models often produce text that doesn't match the image it's describing. To fix this, researchers are working on ways to detect when the model is making mistakes and then correct those mistakes. The authors of this paper propose two new approaches: one to detect when the model is generating untrue text, and another to correct those mistakes by teaching the model to prefer its accurate answers. They tested their ideas and showed that they work well.
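
To make the detect-then-rewrite pipeline concrete, here is a minimal Python sketch. The `Detector` and `Rewriter` interfaces are hypothetical stand-ins for the paper's trained sentence-level detection model and a rewriting model; the actual pipeline and interfaces may differ.

```python
from typing import Callable, List, Tuple

# Hypothetical interfaces standing in for the paper's trained
# sentence-level detector and rewriting model (assumptions, not the
# paper's actual API).
Detector = Callable[[str, str], List[str]]  # (context, sentence) -> hallucination types found
Rewriter = Callable[[str, str], str]        # (context, sentence) -> corrected sentence

def build_preference_pair(
    context: str,
    response: str,
    detect: Detector,
    rewrite: Rewriter,
) -> Tuple[str, str]:
    """Detect-then-rewrite: keep the original response as the rejected
    sample and rewrite every hallucinated sentence to produce the
    preferred sample."""
    sentences = [s.strip() for s in response.split(".") if s.strip()]
    corrected = [
        rewrite(context, s) if detect(context, s) else s
        for s in sentences
    ]
    preferred = ". ".join(corrected) + "."
    return preferred, response
```

Pairs produced this way can then be fed to a preference-learning objective such as the severity-weighted loss sketched next.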

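HSA-DPO itself can be read as direct preference optimization with hallucination severity folded into the loss. Below is a minimal PyTorch sketch that assumes severity enters as a per-pair weight on the standard DPO objective; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def hsa_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(y_w | x), summed over tokens
    policy_rejected_logps: torch.Tensor,  # log p_theta(y_l | x)
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    severity: torch.Tensor,               # per-pair hallucination severity in [0, 1] (assumed weighting)
    beta: float = 0.1,
) -> torch.Tensor:
    """Severity-weighted DPO: pairs whose rejected response contains
    more severe hallucinations contribute more to the loss."""
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    losses = -severity * F.logsigmoid(chosen_margin - rejected_margin)
    return losses.mean()
```
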
Keywords

  • Artificial intelligence
  • Hallucination
  • Optimization