Summary of Multi-modal, Multi-task, Multi-criteria Automatic Evaluation with Vision Language Models, by Masanari Ohi et al.


Multi-modal, Multi-task, Multi-criteria Automatic Evaluation with Vision Language Models

by Masanari Ohi, Masahiro Kaneko, Naoaki Okazaki, Nakamasa Inoue

First submitted to arXiv on: 19 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed HarmonicEval metric provides a comprehensive evaluation of text generated by vision-language models (VLMs) across multiple multi-modal tasks. This reference-free approach aggregates criterion-wise scores into an overall score, allowing it to adapt to different tasks and evaluation criteria. To validate the metric, the authors introduce the MMHE dataset, which contains 18,000 expert human judgments across four multi-modal tasks; experiments on MMHE show that HarmonicEval correlates more strongly with human judgments than conventional metrics.

Low Difficulty Summary (original content by GrooveSquid.com)
Vision-language models have shown impressive abilities on many tasks, but current evaluation methods each focus on a single task. A new metric called HarmonicEval is proposed to address this limitation: it aggregates criterion-wise scores to produce an overall score. To test the metric, the authors created a dataset called MMHE, which contains expert human judgments across four multi-modal tasks.

Keywords

» Artificial intelligence  » Multi-modal