Summary of Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs, by Xiaofeng Zhang et al.
Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
by Xiaofeng Zhang, Yihao Quan, Chaochen Gu, Chen Shen, Xiaosong Yuan, Shaotian Yan, Hao Cheng, Kaijie Wu, Jieping Ye
First submitted to arXiv on: 15 Nov 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper investigates the relationship between image tokens and hallucinations in multimodal large language models (MLLMs). It finds that most hallucinations are linked to attention sinks in the self-attention matrices of image tokens: shallow layers exhibit dense attention sinks, while deeper layers show sparse ones. Heads with high-density attention sinks on the image tokens play a positive role in alleviating hallucination. The authors therefore propose EAH (Enhancing Attention Heads), a training-free method that strengthens the convergence of image tokens' attention sinks in shallow layers by broadcasting one head's attention map to the other heads in the same layer. EAH shows significant hallucination-mitigating performance across different MLLMs and metrics.
Low | GrooveSquid.com (original content) | The paper looks at how large language models make mistakes when describing images. It finds that many of these mistakes come from how the model spreads its attention over the image. The authors fix this with a new method called EAH, which copies the attention pattern of the best-behaving part of the model to the rest, so the model gets better at recognizing what is in an image and makes fewer mistakes.
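The broadcasting idea behind EAH can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the sink-density metric (fraction of attention weights on image-token columns above a threshold) and the threshold value are assumptions made here for the sake of the example.

```python
import numpy as np

def eah_broadcast(attn, image_token_idx, sink_threshold=0.1):
    """Sketch of EAH for one shallow layer: pick the head whose
    attention concentrates most densely on the image tokens (the
    densest attention sink) and broadcast its attention map to
    every other head in the layer.

    attn: (num_heads, seq_len, seq_len) attention maps of one layer.
    image_token_idx: column indices of the image tokens.
    """
    # Assumed density metric: fraction of weights in the image-token
    # columns that exceed a threshold, per head.
    image_cols = attn[:, :, image_token_idx]                   # (H, S, |I|)
    density = (image_cols > sink_threshold).mean(axis=(1, 2))  # (H,)
    best_head = int(np.argmax(density))
    # Replace every head's map with the densest head's map.
    return np.repeat(attn[best_head][None, :, :], attn.shape[0], axis=0)

# Toy usage: 4 heads, 6 tokens, tokens 1-3 are image tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 6, 6))
out = eah_broadcast(attn, image_token_idx=[1, 2, 3])
```

After the call, all four heads share the same attention map; in the actual method this intervention is applied only in a shallow layer, where attention sinks are dense.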
Keywords
» Artificial intelligence » Attention » Hallucination » Self attention