
Summary of MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation, by Chenxi Wang et al.


MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation

by Chenxi Wang, Xiang Chen, Ningyu Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, Huajun Chen

First submitted to arXiv on 15 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Multimedia (cs.MM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper examines the phenomenon of hallucination in Multimodal Large Language Models (MLLMs), where a model generates objects that are not present in the image even though its preceding layers correctly recognize the visual content. The authors propose a dynamic correction decoding method, DeCo, which adaptively selects preceding layers and integrates their knowledge into the final layer to adjust the output logits; a rough sketch of this idea appears after the summaries below. DeCo is model-agnostic and can be combined with classic decoding strategies. Evaluations on widely used benchmarks show that DeCo substantially reduces hallucination rates compared to baselines, highlighting its potential to mitigate hallucinations. The findings suggest that strong language-model priors can suppress visual information in later layers, leading to hallucinations.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper studies a problem with big artificial intelligence models called Multimodal Large Language Models (MLLMs). These models sometimes make mistakes by describing things that aren't actually in an image. The researchers found out why this happens and created a new way to fix it, called DeCo. DeCo helps the model be more accurate by choosing the right information from earlier layers. This method works with different types of AI models and can reduce these mistakes by a lot.
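
For readers who want a more concrete picture, below is a minimal, hypothetical sketch of what dynamic correction decoding can look like in code. It assumes a HuggingFace-style causal (multimodal) language model that exposes per-layer hidden states and an LM head; the candidate layers, the confidence-based layer selection, the `alpha` mixing weight, and the function name are illustrative assumptions, not the authors' exact DeCo formulation.

```python
# A minimal sketch, assuming a HuggingFace-style causal (multimodal) LM that
# returns per-layer hidden states and exposes its LM head via
# get_output_embeddings(). The candidate layers, confidence heuristic, and
# alpha mixing weight are illustrative assumptions, not the exact DeCo method.
import torch
import torch.nn.functional as F

@torch.no_grad()
def corrected_next_token(model, input_ids, candidate_layers, alpha=0.5):
    out = model(input_ids=input_ids, output_hidden_states=True)
    final_logits = out.logits[:, -1, :]       # standard final-layer next-token logits
    lm_head = model.get_output_embeddings()   # maps hidden states to vocabulary logits
                                              # (real models may also need the final layer norm here)

    # Pick the preceding layer whose own next-token prediction is most confident,
    # as a simple proxy for the layer that best reflects the visual evidence.
    best_conf, best_logits = -1.0, None
    for layer_idx in candidate_layers:
        hidden = out.hidden_states[layer_idx][:, -1, :]   # last-position hidden state
        layer_logits = lm_head(hidden)
        conf = F.softmax(layer_logits, dim=-1).max().item()
        if conf > best_conf:
            best_conf, best_logits = conf, layer_logits

    # Integrate the selected preceding-layer knowledge into the final logits,
    # then decode as usual (greedy here; sampling or beam search also work).
    corrected_logits = final_logits + alpha * best_logits
    return corrected_logits.argmax(dim=-1)
```

Because the correction only rewrites the per-step logits, a scheme like this can sit in front of greedy decoding, sampling, or beam search, which matches the paper's claim that DeCo is model-agnostic and compatible with classic decoding strategies.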

Keywords

» Artificial intelligence  » Hallucination  » Language model  » Logits