
Summary of MMDocBench: Benchmarking Large Vision-Language Models for Fine-Grained Visual Document Understanding, by Fengbin Zhu et al.


MMDocBench: Benchmarking Large Vision-Language Models for Fine-Grained Visual Document Understanding

by Fengbin Zhu, Ziyang Liu, Xiang Yao Ng, Haohui Wu, Wenjie Wang, Fuli Feng, Chao Wang, Huanbo Luan, Tat-Seng Chua

First submitted to arXiv on: 25 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Large Vision-Language Models (LVLMs) have revolutionized various vision-language tasks, but their fine-grained visual understanding capabilities remain inadequately evaluated. Traditional benchmarks often combine limited fine-grained samples with other data or focus on object-level assessments in natural images. To comprehensively assess LVLMs’ fine-grained visual perception and reasoning abilities, we introduce MMDocBench, a novel benchmark featuring document images with multi-granularity and multi-modal information. This benchmark comprises 15 main tasks with 4,338 QA pairs and 11,353 supporting regions, covering diverse document types such as research papers, receipts, financial reports, Wikipedia tables, charts, and infographics. We conduct extensive experiments using 16 advanced LVLMs, evaluating their strengths and weaknesses across different tasks and document image types.

Low Difficulty Summary (original content by GrooveSquid.com)
Imagine being able to understand what’s in a picture or a document really well. This is important for things like recognizing objects, reading text, and making sense of data. Currently, machines are not very good at doing this. To improve their abilities, researchers created a new way to test how well they can do this. They made a special set of documents with many different types of information, such as tables, charts, and text. Then, they used 16 advanced computer models to see how well they could understand these documents. This will help machines get better at understanding what’s in pictures and documents.

Keywords

» Artificial intelligence  » Multi-modal