Loading Now

Summary of Mossbench: Is Your Multimodal Language Model Oversensitive to Safe Queries?, by Xirui Li et al.


MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries?

by Xirui Li, Hengguang Zhou, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Cho-Jui Hsieh

First submitted to arxiv on: 22 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)

     Abstract of paper      PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

Summary difficulty Written by Summary
High Paper authors High Difficulty Summary
Read the original abstract here
Medium GrooveSquid.com (original content) Medium Difficulty Summary
This paper investigates the cognitive biases exhibited by advanced Multimodal Large Language Models (MLLMs). While designed to respond safely, MLLMs sometimes reject harmless queries in the presence of specific visual stimuli. The authors identify three types of stimuli that trigger oversensitivity: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To evaluate MLLMs’ oversensitivity, they propose the Multimodal OverSenSitivity Benchmark (MOSSBench), comprising 300 manually collected benign multimodal queries. Empirical studies using MOSSBench on 20 MLLMs reveal that oversensitivity is prevalent among state-of-the-art models, with refusal rates reaching up to 76% for harmless queries. The findings highlight the need for refined safety mechanisms that balance caution with contextually appropriate responses.
Low GrooveSquid.com (original content) Low Difficulty Summary
This paper looks at how advanced computer models called Multimodal Large Language Models (MLLMs) can be biased in their thinking. Just like humans, these models can get caught up in exaggerated or negative ideas and reject harmless information if it’s accompanied by certain visual cues. The researchers identified three types of stimuli that trigger this bias: Exaggerated Risk, Negated Harm, and Counterintuitive Interpretation. To test the models’ biases, they created a benchmark with 300 safe and normal queries. They found that many state-of-the-art MLLMs are biased in their thinking, rejecting harmless information up to 76% of the time. This shows that we need better safety measures for these models so they can make more accurate decisions.

Keywords

» Artificial intelligence