
Summary of CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark, by Ge Zhang et al.


CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark

by Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, Haoran Zhang, Xingwei Qu, Junjie Wang, Ruibin Yuan, Yizhi Li, Zekun Wang, Yudong Liu, Yu-Hsuan Tsai, Fengji Zhang, Chenghua Lin, Wenhao Huang, Jie Fu

First submitted to arxiv on: 22 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper introduces CMMMU, a new benchmark designed to evaluate large multimodal models (LMMs) on tasks requiring college-level subject knowledge and deliberate reasoning in Chinese. Inspired by MMMU, CMMMU consists of 12k manually collected multimodal questions drawn from college exams, quizzes, and textbooks across six core disciplines. The questions span 30 subjects and comprise 39 highly heterogeneous image types. The authors evaluate 11 open-source LMMs and the proprietary GPT-4V (vision), finding that even GPT-4V achieves an accuracy of only 42%, indicating substantial room for improvement. CMMMU aims to promote the democratization of LMMs by providing diverse language contexts.

Low Difficulty Summary (GrooveSquid.com, original content)
This paper creates a new way to test artificial intelligence models on Chinese questions. The questions come from college exams and cover many subjects, such as art, science, and business. The authors want to see whether AI can understand these questions as well as humans do. They tested 12 different AI models and found that even the best one answered only about 42% of the questions correctly. This shows there is still a lot of room for improvement. The goal is to make AI smarter and more helpful for people.

Keywords

» Artificial intelligence  » GPT