
Summary of MoDE: CLIP Data Experts via Clustering, by Jiawei Ma et al.


MoDE: CLIP Data Experts via Clustering

by Jiawei Ma, Po-Yao Huang, Saining Xie, Shang-Wen Li, Luke Zettlemoyer, Shih-Fu Chang, Wen-Tau Yih, Hu Xu

First submitted to arxiv on: 24 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes Mixture of Data Experts (MoDE), a system that learns to mitigate noisy supervision in contrastive language-image pretraining (CLIP) by training multiple CLIP data experts on different clusters of web-crawled data. Each expert is less sensitive to false negatives in other clusters, and their outputs are ensembled using weights determined by task metadata and cluster conditions. The authors demonstrate the effectiveness of MoDE by training four CLIP data experts on ViT-B/16, which outperform ViT-L/14-based OpenAI CLIP and OpenCLIP on zero-shot image classification with less than 35% of the training cost. Moreover, MoDE enables asynchronous training and the flexible addition of new data experts.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper helps improve the performance of contrastive language-image pretraining (CLIP) by learning to deal with noisy supervision. It does this by creating multiple teams that work on different parts of the problem, each one less affected by mistakes from other teams. This way, it’s possible to get better results without needing as much training data or computational power.
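The medium-difficulty summary above describes MoDE's inference-time ensembling: each data expert's output is weighted by how well the task metadata (e.g., embedded class names) matches that expert's data cluster. A minimal NumPy sketch of that idea follows; all names, sizes, and the softmax temperature are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical toy setup (dimensions are illustrative, not the paper's) ---
d = 8          # embedding dimension
n_experts = 4  # MoDE trains one CLIP data expert per data cluster
n_classes = 3  # zero-shot classification task

# Each cluster center summarizes the training captions assigned to one expert.
cluster_centers = rng.normal(size=(n_experts, d))
cluster_centers /= np.linalg.norm(cluster_centers, axis=1, keepdims=True)

# Task metadata embedding (e.g., an embedded class name for the downstream task).
task_meta = rng.normal(size=d)
task_meta /= np.linalg.norm(task_meta)

# Each expert independently scores the query image against the class prompts.
expert_logits = rng.normal(size=(n_experts, n_classes))

# Per-expert ensembling weight: softmax over metadata-to-cluster similarity,
# so experts whose training data resembles the task contribute more.
sim = cluster_centers @ task_meta
temperature = 0.1  # assumed value for illustration
w = np.exp(sim / temperature)
weights = w / w.sum()

# Final prediction: weighted ensemble of the experts' outputs.
ensembled = weights @ expert_logits
print(ensembled.shape)
```

The key design point the summary highlights is that these weights depend only on cluster centers and task metadata, so new experts can be trained asynchronously and added to the ensemble without retraining the others.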

Keywords

» Artificial intelligence  » Image classification  » Pretraining  » ViT  » Zero-shot