Summary of Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios, by Yunkai Dang et al.
Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios
by Yunkai Dang, Mengxi Gao, Yibo Yan, Xin Zou, Yanggan Gu, Aiwei Liu, Xuming Hu
First submitted to arXiv on: 5 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | The authors propose a pipeline for probing response consistency in Multimodal Large Language Models (MLLMs): first collect each model's answers without any misleading information, then collect answers again after injecting misleading instructions. Response uncertainty is quantified by the misleading rate, the fraction of answers that flip from correct to incorrect or from incorrect to correct (see the sketch after this table). Building on this metric, the authors construct a Multimodal Uncertainty Benchmark (MUB) that assesses MLLMs' vulnerability across domains using both explicit and implicit misleading instructions. |
Low | GrooveSquid.com (original content) | All evaluated MLLMs turn out to be highly susceptible to misleading instructions, with an average misleading rate exceeding 86%. To improve robustness, the authors fine-tune open-source MLLMs on a mix of explicit and implicit misleading data, which significantly reduces their misleading rates. The study's code is available on GitHub. |
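To make the misleading-rate metric concrete, here is a minimal Python sketch assuming each answer is scored as a boolean (correct/incorrect) before and after a misleading instruction is injected. The function name `misleading_rate` and the data layout are illustrative assumptions for this summary, not code from the paper's repository.

```python
def misleading_rate(baseline_correct, misled_correct):
    """Fraction of answers that flip after a misleading instruction.

    baseline_correct: list[bool], correctness of each answer without
                      misleading input
    misled_correct:   list[bool], correctness of the same answers after
                      the misleading instruction is injected

    A "flip" counts both directions: correct -> incorrect and
    incorrect -> correct, matching the paper's described metric.
    """
    assert len(baseline_correct) == len(misled_correct)
    flips = sum(b != m for b, m in zip(baseline_correct, misled_correct))
    return flips / len(baseline_correct)


# Example: 3 of 5 answers change correctness after the misleading
# instruction, giving a misleading rate of 0.6.
baseline = [True, True, False, True, False]
misled = [False, True, True, True, True]
print(misleading_rate(baseline, misled))  # 0.6
```

Counting flips in both directions captures instability rather than mere accuracy loss: a model that changes a wrong answer to a right one under pressure is still exhibiting response uncertainty.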