Summary of CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning, by Zheqi He et al.
CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning
by Zheqi He, Xinya Wu, Pengfei Zhou, Richeng Xuan, Guang Liu, Xi Yang, Qiannan Zhu, Hua Huang
First submitted to arXiv on: 25 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Multimedia (cs.MM)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper introduces CMMU, a novel benchmark for multi-modal, multi-type question understanding and reasoning in Chinese. It is designed to evaluate the mastery of domain-specific knowledge in multi-modal large language models (MLLMs) and consists of 3,603 questions across 7 subjects, spanning primary to high school levels, in three formats: multiple-choice, multiple-response, and fill-in-the-blank. The authors also propose an evaluation strategy called Positional Error Variance for assessing multiple-choice questions (see the sketch after this table). Seven open-source MLLMs are evaluated alongside GPT-4V, Gemini-Pro, and Qwen-VL-Plus, and the results show that CMMU poses a significant challenge to recent MLLMs. |
| Low | GrooveSquid.com (original content) | The paper creates a special test for big language models that can also look at pictures, to see if they can understand and answer questions in Chinese. It’s like a super-hard quiz! The test has many different types of questions, like choosing the right answer from several options or filling in the blanks. The authors wanted to know how well these models do on the test, so they tried out seven of them, plus three special ones. They found that most of the models struggled with the test, which shows just how hard it is! |
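The summaries name Positional Error Variance without spelling out its mechanics. Below is a minimal sketch of one plausible reading, assuming the evaluation places the ground-truth option in each answer slot in turn and then measures how unevenly the model's errors spread across slots; the function name, data layout, and toy numbers are illustrative assumptions, not the paper's implementation.

```python
import statistics

def positional_error_variance(outcomes_by_position):
    """Variance of error rates across answer positions.

    outcomes_by_position maps an option slot (e.g. 'A'..'D') to a list
    of booleans: True when the model answered a question correctly with
    the ground-truth option placed in that slot. A high variance means
    errors cluster at particular positions, suggesting positional bias
    rather than genuine uncertainty about the content.
    """
    error_rates = [
        1.0 - sum(outcomes) / len(outcomes)
        for outcomes in outcomes_by_position.values()
    ]
    return statistics.pvariance(error_rates)

# Toy example: errors concentrated in slot 'D' produce a high variance.
runs = {
    "A": [True, True, True, True],
    "B": [True, True, False, True],
    "C": [True, True, True, True],
    "D": [False, False, True, False],
}
print(f"positional error variance: {positional_error_variance(runs):.4f}")
```

Under this reading, a model that is merely biased toward picking "A" would score near zero accuracy when the answer sits in slot "D", and the resulting spread in per-slot error rates exposes the bias in a way raw accuracy cannot.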
Keywords
» Artificial intelligence » Gemini » Multi-modal