

Unlocking the Multi-modal Potential of CLIP for Generalized Category Discovery

by Enguang Wang, Zhimao Peng, Zhengyuan Xie, Fei Yang, Xialei Liu, Ming-Ming Cheng

First submitted to arXiv on: 15 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors): the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a new approach to generalized category discovery (GCD), which aims to accurately classify old categories and discover new ones in unlabelled datasets. Current methods are limited by only considering visual information, leading to poor performance on visually similar classes. To address this issue, the authors introduce text embeddings into the GCD task using a Text Embedding Synthesizer (TES). The TES leverages the property of CLIP models to generate aligned vision-language features, converting visual embeddings into tokens for the text encoder. A dual-branch framework is employed, which jointly learns and enforces consistency across different modality branches, promoting mutual enhancement and fusion of visual and text knowledge. The proposed method outperforms baseline methods by a large margin on GCD benchmarks, achieving new state-of-the-art results.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper tries to solve a problem where we have unlabelled data with old and new categories, and we want to correctly classify the old ones while also discovering the new ones. Current methods only use visual information, but they struggle when classes look similar. The authors come up with an idea to use text information too, by creating fake text embeddings for unlabelled data. They do this with a pre-trained vision-language model called CLIP. Then, they use two different branches to process visual and text information together, which helps each branch learn from the other. This new method performs much better than previous ones on the same task.
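The two ideas above — a synthesizer that turns a visual embedding into pseudo text tokens, and a dual-branch consistency objective between the visual and text predictions — can be sketched roughly as follows. This is a minimal, hypothetical numpy illustration: the class names, the linear projection, the number of tokens, and the symmetric cross-entropy loss are all assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TextEmbeddingSynthesizer:
    """Hypothetical stand-in for the paper's TES: maps a CLIP image
    embedding to a sequence of pseudo text tokens. Here it is a single
    random linear projection; the real TES is trained so that CLIP's
    text encoder produces embeddings aligned with the image embedding."""
    def __init__(self, dim_visual, num_tokens, dim_token):
        self.W = rng.normal(0.0, 0.02, (dim_visual, num_tokens * dim_token))
        self.num_tokens, self.dim_token = num_tokens, dim_token

    def __call__(self, v):
        # v: (batch, dim_visual) -> (batch, num_tokens, dim_token)
        return (v @ self.W).reshape(-1, self.num_tokens, self.dim_token)

def consistency_loss(logits_vis, logits_txt):
    """Symmetric cross-entropy between the two branches' class
    distributions, pulling visual and text predictions together
    (one plausible way to enforce cross-modal consistency)."""
    p, q = softmax(logits_vis), softmax(logits_txt)
    ce = lambda a, b: -(a * np.log(b + 1e-8)).sum(axis=-1).mean()
    return 0.5 * (ce(p, q) + ce(q, p))

batch, dim, num_classes = 4, 512, 10
v = rng.normal(size=(batch, dim))                # CLIP image embeddings
tes = TextEmbeddingSynthesizer(dim, num_tokens=8, dim_token=512)
tokens = tes(v)                                  # pseudo text tokens
print(tokens.shape)                              # (4, 8, 512)

# Each branch classifies over old + new categories; the consistency
# term is small when the two branches already agree.
logits_vis = rng.normal(size=(batch, num_classes))
logits_txt = logits_vis + 0.1 * rng.normal(size=(batch, num_classes))
print(consistency_loss(logits_vis, logits_txt) >= 0.0)
```

In the paper's actual framework the pseudo tokens would be fed through CLIP's frozen text encoder, and both branches would be trained jointly; the sketch only shows the data flow and the shape of the consistency term.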

Keywords

* Artificial intelligence
* Embedding
* Encoder