MMMT-IF: A Challenging Multimodal Multi-Turn Instruction Following Benchmark

by Elliot L. Epstein, Kaisheng Yao, Jing Li, Xinyi Bai, Hamid Palangi

First submitted to arXiv on: 26 Sep 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed MMMT-IF evaluation set challenges models to retrieve instructions dispersed across long multimodal dialogues and to reason while respecting those instruction constraints. To measure this, the authors introduce a new metric, Programmatic Instruction Following (PIF), defined as the fraction of instructions a model follows correctly while performing a reasoning task. The PIF metric aligns with human instruction-following ratings, showing a 60 percent correlation. Experiments show that models such as Gemini 1.5 Pro, GPT-4o, and Claude 3.5 Sonnet have a decreasing PIF score across turns, dropping on average from 0.81 at turn 1 to 0.64 at turn 20. Additionally, when each response is generated four times (the PIF-4-4 setting), the models successfully follow all instructions only 11% of the time. A minimal sketch of how a PIF-style metric could be computed appears after these summaries.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new evaluation set called MMMT-IF that tests how well multimodal, multi-turn dialogue models follow instructions. The challenge is to retrieve instructions scattered across long dialogues and apply them correctly. The authors also introduce a new metric, Programmatic Instruction Following (PIF), to measure this ability. They find that even advanced models like Gemini 1.5 Pro, GPT-4o, and Claude 3.5 Sonnet struggle to follow instructions consistently.

Keywords

» Artificial intelligence  » Claude  » Gemini  » GPT