Summary of Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning, by Xingchen Zeng et al.
Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning
by Xingchen Zeng, Haichuan Lin, Yilin Ye, Wei Zeng
First submitted to arXiv on: 29 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper explores applying multimodal large language models (MLLMs) to chart question answering (CQA). Current efforts focus on scaling up training datasets, but the study reveals notable gaps in existing MLLMs and CQA datasets. Specifically, data collection and synthesis prioritize volume over coverage of fine-grained visual encodings and QA task types, yielding an unbalanced data distribution. Moreover, existing work adapts MLLMs originally designed for natural images to charts without accounting for unique chart characteristics such as rich text elements. The proposed visualization-referenced instruction tuning approach improves both the training dataset and the model: it filters for diverse data, refines and augments that data using LLM-based generation techniques, and incorporates a mixture-of-resolution adaptation strategy. Experimental results show the approach outperforms state-of-the-art CQA models on established benchmarks, even with fewer training examples. The authors also contribute a dataset split as a benchmark for future research. |
Low | GrooveSquid.com (original content) | This paper is about using computer programs called large language models to answer questions about charts and graphs. Right now, these models are trained on lots of data that misses important details, like exactly what is shown in each chart. The researchers came up with a new way to train the models so they get better at answering questions about charts. They tested their approach and found it works even when they use less training data than before. This matters because it means these models can answer questions about charts more accurately and efficiently. |
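The rebalancing idea from the medium summary (existing CQA datasets over-represent some chart and question types, so the authors filter for diversity rather than raw volume) can be sketched roughly as capping the number of examples per chart-type/task-type group. This is an illustrative toy sketch, not the authors' actual pipeline; the function name and record fields are made up for the example.

```python
from collections import defaultdict

def balance_dataset(items, cap_per_group):
    """Cap the number of QA items per (chart type, task type) group.

    A rough sketch of trading raw volume for balanced coverage of
    visual encodings and QA tasks; field names are hypothetical.
    """
    groups = defaultdict(list)
    for item in items:
        groups[(item["chart_type"], item["task_type"])].append(item)
    balanced = []
    for group in groups.values():
        # Keep at most `cap_per_group` examples from each group, so
        # over-represented combinations no longer dominate training.
        balanced.extend(group[:cap_per_group])
    return balanced

# Toy dataset where bar-chart retrieval questions dominate (8 of 9 items).
data = (
    [{"chart_type": "bar", "task_type": "retrieval", "q": f"q{i}"} for i in range(8)]
    + [{"chart_type": "line", "task_type": "trend", "q": "q8"}]
)
print(len(balance_dataset(data, cap_per_group=2)))  # → 3
```

In practice the paper's pipeline also refines and augments the filtered data with LLM-based generation, which this sketch omits; the point here is only the volume-versus-coverage trade-off.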
Keywords
» Artificial intelligence » Instruction tuning » Question answering