Summary of Mantis: Interleaved Multi-image Instruction Tuning, by Dongfu Jiang et al.
MANTIS: Interleaved Multi-Image Instruction Tuning
by Dongfu Jiang, Xuan He, Huaye Zeng, Cong Wei, Max Ku, Qian Liu, Wenhu Chen
First submitted to arXiv on: 2 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here. |
Medium | GrooveSquid.com (original content) | The proposed approach trains strong multimodal models (Mantis) through instruction tuning with academic-level resources, rather than the massive interleaved pre-training used by models such as OpenFlamingo, Emu2, and Idefics. The authors construct Mantis-Instruct, a dataset of 721K multi-image instruction examples, and use it to train Mantis models that excel at skills such as co-reference, comparison, reasoning, and temporal understanding (see the sketch below the table). Evaluation on 8 multi-image benchmarks and 6 single-image benchmarks shows that Mantis-Idefics2 achieves state-of-the-art (SoTA) results on all multi-image benchmarks, outperforming the strongest baseline, Idefics2-8B, by an average of 13 absolute points. Notably, Idefics2-8B was pre-trained on a dataset 200x larger than Mantis-Instruct. The study demonstrates that low-cost instruction tuning can match or surpass the results of massive pre-training. |
Low | GrooveSquid.com (original content) | This paper trains special machines called multimodal models (Mantis) using instructions and academic-scale resources. Instead of relying on huge amounts of data, the authors focus on teaching the models specific skills. They build a dataset of 721,000 instructions that teach the models to understand co-references, compare things, reason, and follow events over time. The results show that these trained models score highly on many tasks, outperforming other approaches even though they used far less data. |
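
To make the idea of multi-image instruction tuning more concrete, here is a minimal sketch of what one interleaved multi-image training example could look like. The field names, placeholder token, and overall structure are illustrative assumptions for this summary, not the actual Mantis-Instruct schema.

```python
# Hypothetical structure of one interleaved multi-image instruction example.
# Field names are illustrative only; this is NOT the real Mantis-Instruct schema.
example = {
    "images": ["kitchen_before.jpg", "kitchen_after.jpg"],  # images referenced by the prompt
    "conversation": [
        {
            "role": "user",
            # <image> placeholders mark where each image is interleaved in the text
            "content": "Here is the room before renovation: <image> and after: <image>. "
                       "Compare the two photos and list what changed.",
        },
        {
            "role": "assistant",
            "content": "The cabinets were repainted white, a kitchen island was added, "
                       "and the old tile floor was replaced with wood.",
        },
    ],
    "skill": "comparison",  # e.g. co-reference, comparison, reasoning, temporal understanding
}

if __name__ == "__main__":
    # Sanity check: the number of <image> placeholders should match the number of images.
    n_tags = sum(turn["content"].count("<image>") for turn in example["conversation"])
    assert n_tags == len(example["images"])
    print("Example has", len(example["images"]), "images and", n_tags, "placeholders.")
```

In an instruction-tuning setup of this kind, each placeholder would be replaced by the model's visual tokens for the corresponding image, so the model learns to resolve references like "the two photos" across several interleaved images rather than a single one.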
Keywords
» Artificial intelligence » Instruction tuning