Summary of Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA, by Qianqi Yan et al.


Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA

by Qianqi Yan, Xuehai He, Xiang Yue, Xin Eric Wang

First submitted to arxiv on: 30 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

Large Multimodal Models (LMMs) have achieved high accuracy on existing benchmarks in Medical Visual Question Answering (Med-VQA). However, their reliability under robust evaluation is questionable. This study reveals that state-of-the-art models perform worse than random guessing when subjected to a simple probing evaluation on medical diagnosis questions. To address this critical evaluation problem, the authors introduce the Probing Evaluation for Medical Diagnosis (ProbMed) dataset, which pairs original questions with negation questions and incorporates procedural diagnosis. The evaluation shows that top-performing models like GPT-4o, GPT-4V, and Gemini Pro perform worse than random guessing on specialized diagnostic questions, indicating significant limitations in handling fine-grained medical inquiries. Models like LLaVA-Med struggle even with more general questions, while results from CheXagent demonstrate that expertise can transfer across different imaging modalities of the same organ. This study underscores the urgent need for robust evaluation to ensure the reliability of LMMs in critical fields like medical diagnosis.

Low Difficulty Summary (written by GrooveSquid.com, original content)

Researchers are working on machines that can answer medical questions based on pictures. They've found that these machines, called Large Multimodal Models (LMMs), don't do well when asked tough medical questions. In fact, they often make mistakes or guess randomly. To fix this problem, the researchers created a new way to test these machines and found that even the best ones struggle with difficult medical questions. They also showed that these machines are better at answering some types of questions than others. Overall, this study shows that we need to improve how we test these machines so they can be trusted in important situations like medical diagnosis.

Keywords

* Artificial intelligence  * Gemini  * GPT  * Question answering  * Transferability