Summary of Modality-Inconsistent Continual Learning of Multimodal Large Language Models, by Weiguo Pian et al.
Modality-Inconsistent Continual Learning of Multimodal Large Language Models
by Weiguo Pian, Shijian Deng, Shentong Mo, Yunhui Guo, Yapeng Tian
First submitted to arXiv on: 17 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV); Sound (cs.SD); Audio and Speech Processing (eess.AS)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper introduces Modality-Inconsistent Continual Learning (MICL), a new continual learning scenario for Multimodal Large Language Models (MLLMs) in which tasks arrive with inconsistent modalities (image, audio, or video) and varying task types (captioning or question answering). Unlike prior settings, MICL combines modality shifts with task-type shifts, both of which drive catastrophic forgetting. To address this, the authors propose MoInCL, which employs a Pseudo Targets Generation Module to mitigate forgetting caused by task-type shifts in previously seen modalities, and Instruction-based Knowledge Distillation to preserve the model’s ability to handle earlier modalities when new ones are introduced (see the sketch after this table). The paper benchmarks MICL on six tasks; experiments show that MoInCL outperforms representative and state-of-the-art continual learning baselines. |
Low | GrooveSquid.com (original content) | This research introduces a new way for computers to learn from different types of data (images, audio, or video) and to perform different tasks. The model must adapt when it is shown new kinds of data and asked to do new things, without forgetting what it already knows. To solve this problem, the authors developed a system called MoInCL that helps the model remember what it learned before, even as new data types are introduced. Tested on six tasks, it performed better than other systems designed for similar problems. |
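To make the mechanisms in the medium summary concrete, here is a minimal, hypothetical PyTorch sketch of the two ideas: a frozen copy of the model (taken before a new modality is learned) acts as a teacher, a KL-divergence term keeps the student close to the teacher on old-modality inputs (a standard form of knowledge distillation), and the teacher’s greedy predictions stand in for pseudo targets on an old task type. All names here (`TinyMLLM`, `distillation_loss`, the loss weights) are illustrative assumptions, not the paper’s actual implementation, and the real method operates on instruction-conditioned multimodal inputs rather than raw feature tensors.

```python
import copy
import torch
import torch.nn.functional as F

# Hypothetical stand-in for an MLLM mapping input features to per-token
# vocabulary logits; the paper's model is a full multimodal LLM.
class TinyMLLM(torch.nn.Module):
    def __init__(self, feat_dim=32, vocab_size=100):
        super().__init__()
        self.proj = torch.nn.Linear(feat_dim, vocab_size)

    def forward(self, features):          # features: (batch, seq, feat_dim)
        return self.proj(features)        # logits:   (batch, seq, vocab_size)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student token distributions;
    the paper's exact instruction-based formulation may differ."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1)
    with torch.no_grad():
        p = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(s, p, reduction="batchmean") * (t * t)

# Before training on a new modality, freeze a copy of the model as teacher.
student = TinyMLLM()
teacher = copy.deepcopy(student).eval()
for param in teacher.parameters():
    param.requires_grad_(False)

# One training step: task loss on the new modality/task, plus a distillation
# term and a pseudo-target term that anchor behavior on earlier modalities.
features_new = torch.randn(4, 8, 32)       # inputs from the new modality
targets_new = torch.randint(0, 100, (4, 8))
features_old = torch.randn(4, 8, 32)       # inputs from a seen modality

logits_new = student(features_new)
task_loss = F.cross_entropy(logits_new.reshape(-1, 100),
                            targets_new.reshape(-1))

old_logits_student = student(features_old)
with torch.no_grad():
    old_logits_teacher = teacher(features_old)
    # Pseudo-target sketch: the frozen teacher's greedy predictions stand in
    # for answers to an old task type's instruction (decoding details assumed).
    pseudo_targets = old_logits_teacher.argmax(dim=-1)

kd_loss = distillation_loss(old_logits_student, old_logits_teacher)
pseudo_loss = F.cross_entropy(old_logits_student.reshape(-1, 100),
                              pseudo_targets.reshape(-1))

loss = task_loss + 0.5 * kd_loss + 0.5 * pseudo_loss  # illustrative weights
loss.backward()
```

In this sketch the distillation term preserves old-modality behavior when a new modality arrives, while the pseudo-target term mirrors the role of the Pseudo Targets Generation Module for task-type shifts; the relative weights and decoding strategy are placeholders.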
Keywords
» Artificial intelligence » Continual learning » Knowledge distillation » Question answering