Active Data Curation Effectively Distills Large-Scale Multimodal Models
by Vishaal Udandarao, Nikhil Parthasarathy, Muhammad Ferjad Naeem, Talfan Evans, Samuel Albanie, Federico Tombari, Yongqin Xian, Alessio Tonioni, Olivier J. Hénaff
First submitted to arXiv on: 27 Nov 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes an alternative to knowledge distillation (KD), the standard method for compressing large-scale models into smaller ones. The authors introduce active data curation as an effective way to distill contrastive multimodal pretraining: their simple online batch selection method, ACID, outperforms strong KD baselines across a range of model, data, and compute configurations. They further show that active data curation is complementary to standard KD, and that combining the two trains highly performant, inference-efficient models. The resulting pretraining framework, ACED, achieves state-of-the-art results across 27 zero-shot classification and retrieval tasks while using up to 11% fewer inference FLOPs. Finally, ACED models yield strong vision encoders for training generative multimodal models in the LiT-Decoder setting, outperforming larger vision encoders on image-captioning and visual question-answering tasks. (A sketch of this kind of online batch-selection loop follows the table.) |
| Low | GrooveSquid.com (original content) | This paper is about making big AI models smaller and more efficient. The authors use a new way of teaching a small model what a big model knows, called active data curation: instead of training on everything, the small model trains only on the most useful examples. This simple method works better than other approaches people have tried, and it can even be combined with another popular technique called knowledge distillation. The resulting models are really good at tasks like recognizing objects in pictures and answering questions about what’s in those pictures. |
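
To make "online batch selection" concrete, here is a minimal Python sketch of how such a loop could look. This is illustrative only, not the authors' ACID code: the scoring rule (learner loss minus reference-model loss, a common "learnability" heuristic in the data-curation literature), the `per_example_clip_loss` helper, and the `encode_image`/`encode_text` model interface are all assumptions; the paper specifies the actual method.

```python
# Illustrative sketch of ACID-style online batch selection (NOT the paper's
# official code). Assumptions: `learner` and `reference` are CLIP-like models
# exposing encode_image/encode_text, and `texts` is a tensor of token ids.
import torch
import torch.nn.functional as F


def per_example_clip_loss(model, images, texts, temperature=0.07):
    """Assumed helper: symmetric InfoNCE loss, one value per image-text pair."""
    img = F.normalize(model.encode_image(images), dim=-1)
    txt = F.normalize(model.encode_text(texts), dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    i2t = F.cross_entropy(logits, targets, reduction="none")     # image -> text
    t2i = F.cross_entropy(logits.t(), targets, reduction="none")  # text -> image
    return 0.5 * (i2t + t2i)


def select_subbatch(learner, reference, images, texts, keep_fraction=0.25):
    """Score a large candidate "super-batch" and keep only the top fraction.

    The score used here favours examples the learner still gets wrong but a
    stronger reference model finds easy ("learnable" examples).
    """
    with torch.no_grad():
        scores = (per_example_clip_loss(learner, images, texts)
                  - per_example_clip_loss(reference, images, texts))
    k = max(1, int(keep_fraction * images.shape[0]))
    keep = torch.topk(scores, k).indices
    return images[keep], texts[keep]


# Usage inside a training loop: draw a super-batch, filter it, then take a
# gradient step on the selected sub-batch (optionally adding a KD loss there).
# images, texts = select_subbatch(learner, reference, super_images, super_texts)
```

Training only on the selected sub-batch is what makes the curation "active": as the learner improves, the scores change and so does the data it sees next.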
Keywords
» Artificial intelligence » Classification » Decoder » Image captioning » Inference » Knowledge distillation » Pretraining » Question answering » Zero shot