

MLLMReID: Multimodal Large Language Model-based Person Re-identification

by Shan Yang, Yongfei Zhang

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores adapting multimodal large language models (MLLMs) to person re-identification (ReID), the task of identifying and tracking individuals across images and videos. To address two issues that arise when fine-tuning MLLMs for ReID, the authors propose a method called MLLMReID. First, the approach introduces Common Instruction, a simple continuation-style instruction that leverages the inherent ability of LLMs to continue writing, avoiding the need for diverse and complex instruction designs. Second, the paper proposes a multi-task learning-based synchronization module that ensures the visual encoder of the MLLM is trained synchronously with the ReID task. Experimental results demonstrate the superiority of this method.
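To make the training setup concrete, here is a minimal sketch of how a multi-task objective might couple the MLLM's language-modeling loss with a ReID identity loss, so that gradients from both objectives reach the visual encoder in the same backward pass. This is an illustrative assumption based on the summary above, not the paper's actual implementation; the names (MultiTaskReIDLoss, id_head, reid_weight) and the example instruction string are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative "continue writing" style instruction (an assumption,
# not the paper's exact Common Instruction wording):
COMMON_INSTRUCTION = "Continue writing a description of the person in the image:"

class MultiTaskReIDLoss(nn.Module):
    """Hypothetical multi-task objective: LM continuation loss + ReID identity loss."""

    def __init__(self, num_identities: int, feat_dim: int, reid_weight: float = 1.0):
        super().__init__()
        # Identity classifier applied to visual-encoder features for the ReID task.
        self.id_head = nn.Linear(feat_dim, num_identities)
        self.ce = nn.CrossEntropyLoss()
        self.reid_weight = reid_weight

    def forward(self, lm_loss, visual_feats, identity_labels):
        # lm_loss: the MLLM's usual next-token (continuation) loss, a scalar.
        # visual_feats: pooled visual-encoder features, shape (batch, feat_dim).
        # identity_labels: person IDs per image, shape (batch,).
        reid_loss = self.ce(self.id_head(visual_feats), identity_labels)
        # Summing the two losses lets a single backward pass train the visual
        # encoder on the ReID objective synchronously with instruction tuning.
        return lm_loss + self.reid_weight * reid_loss

# Usage sketch: combine the two losses once per batch, then backpropagate.
loss_fn = MultiTaskReIDLoss(num_identities=751, feat_dim=768)
lm_loss = torch.tensor(2.3)                    # stand-in for the MLLM's LM loss
visual_feats = torch.randn(8, 768)             # stand-in encoder features
identity_labels = torch.randint(0, 751, (8,))  # stand-in person IDs
total_loss = loss_fn(lm_loss, visual_feats, identity_labels)
```

The key design point in this sketch is that both loss terms share one computation graph, so the visual encoder is updated by the ReID signal at the same time as by the instruction-tuning signal rather than in a separate stage.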
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about using AI models that can understand both pictures and text, called multimodal large language models (MLLMs), for something called ReID, which means identifying the same person across different pictures and videos. Right now, these models are really good at doing lots of things, but they haven’t been tested much for ReID yet. The authors want to make it easier to use these models for ReID by giving them simple instructions instead of complicated ones. They also found a way to train the model’s “eyes” (the part that looks at images) and its “brain” (the part that reasons about what it sees) at the same time, so they work better together.

Keywords

  • Artificial intelligence
  • Encoder
  • Fine-tuning
  • Multi-task