
Summary of Parrot: Multilingual Visual Instruction Tuning, by Hai-Long Sun et al.


Parrot: Multilingual Visual Instruction Tuning

by Hai-Long Sun, Da-Wei Zhou, Yang Li, Shiyin Lu, Chao Yi, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, De-Chuan Zhan, Han-Jia Ye

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses the limitations of Multimodal Large Language Models (MLLMs) in processing non-English languages. Existing methods fine-tune vision encoders with LLMs through supervised learning, which reduces performance on non-English languages because the training datasets are imbalanced across languages. The authors introduce Parrot, a method that uses textual guidance to drive visual token alignment at the language level: a Mixture-of-Experts (MoE) module selects experts that convert the initial visual tokens into language-specific visual tokens (a minimal code sketch of this idea follows the summaries below). The paper also proposes the Massive Multilingual Multimodal Benchmark (MMMB), which covers 6 languages, 15 categories, and 12,000 questions. Parrot achieves state-of-the-art performance on multilingual benchmarks and performs strongly across a wide range of multimodal tasks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about making computers better at understanding text and images in many different languages. Right now, computers do this well for English but not as well for other languages, and the authors found that the problem lies in how the models are trained. They introduce a new method called Parrot that helps a model understand text and images in many languages, and they also created a large dataset of text and images in different languages to test their method.
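To make the mechanism described in the medium-difficulty summary concrete, here is a minimal sketch of a language-aware Mixture-of-Experts projector: a pooled embedding of the instruction text routes the visual tokens through per-language expert MLPs and blends the results. This is an illustration only, not the authors' implementation; the module name LanguageMoEProjector, the expert count, and all tensor dimensions are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of textual guidance selecting
# Mixture-of-Experts projections that convert initial visual tokens into
# language-specific visual tokens. Shapes and names are illustrative.
import torch
import torch.nn as nn


class LanguageMoEProjector(nn.Module):
    def __init__(self, dim: int = 1024, num_experts: int = 6):
        super().__init__()
        # One lightweight expert (MLP) per supported language (assumed 6 here).
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
                for _ in range(num_experts)
            ]
        )
        # Router scores each expert from the pooled text embedding.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, visual_tokens: torch.Tensor, text_embedding: torch.Tensor):
        # visual_tokens: (batch, num_tokens, dim) from the vision encoder
        # text_embedding: (batch, dim) pooled embedding of the multilingual instruction
        gate = torch.softmax(self.router(text_embedding), dim=-1)  # (batch, num_experts)
        expert_outputs = torch.stack(
            [expert(visual_tokens) for expert in self.experts], dim=1
        )  # (batch, num_experts, num_tokens, dim)
        # Blend expert outputs according to the language-aware gate.
        mixed = (gate[:, :, None, None] * expert_outputs).sum(dim=1)
        # Residual connection keeps the original visual information intact.
        return visual_tokens + mixed


# Example usage with random tensors standing in for real features.
if __name__ == "__main__":
    projector = LanguageMoEProjector(dim=1024, num_experts=6)
    vis = torch.randn(2, 256, 1024)   # visual tokens for 2 images
    txt = torch.randn(2, 1024)        # pooled instruction-text embedding
    out = projector(vis, txt)
    print(out.shape)                  # torch.Size([2, 256, 1024])
```

In a Parrot-style pipeline such a projector would sit between the vision encoder and the LLM; here random tensors merely stand in for real visual and text features.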

Keywords

» Artificial intelligence  » Alignment  » Mixture of experts  » Supervised  » Token