Summary of COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training, by Sanghwan Kim et al.
COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training
by Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz, Zeynep Akata
First submitted to arXiv on: 2 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper's original abstract; read it here.
Medium | GrooveSquid.com (original content) | Vision-Language Models (VLMs) have made significant advances in vision and language tasks using contrastive loss. However, contrastive training tends to focus on foreground objects and neglects other important information in the image, limiting the models' effectiveness. To address this, the authors propose COSMOS, a self-supervised vision-language pre-training framework that integrates a text-cropping strategy and a cross-attention module. COSMOS creates global and local views of both images and texts, and optimizes comprehensive cross-modal representations with a cross-modality self-distillation loss. COSMOS consistently outperforms strong baselines on zero-shot downstream tasks such as retrieval, classification, and semantic segmentation (a minimal code sketch of the self-distillation idea follows the table).
Low | GrooveSquid.com (original content) | This paper is about making computers better at understanding pictures and words together. Current models are good at focusing on the main things in a picture, but not as good at understanding the other details. To fix this, the authors created a new way to train these models, called COSMOS, which helps them learn about the whole picture, not just the main part. In tests, it does better than other methods on many different tasks.
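For readers who want a concrete picture of the cross-modality self-distillation loss mentioned in the medium summary, the snippet below is a minimal, hypothetical PyTorch sketch. It assumes a DINO-style soft cross-entropy between normalized student (local-view) embeddings of one modality and detached teacher (global-view) embeddings of the other; the function name, tensor shapes, temperature, and exact loss form are illustrative assumptions, not the paper's implementation, and the text-cropping strategy and cross-attention module are not shown.

```python
import torch
import torch.nn.functional as F

def cross_modal_self_distillation(student_emb, teacher_emb, temperature=0.1):
    """Match student (local-view) embeddings of one modality to teacher
    (global-view) embeddings of the other modality via soft cross-entropy."""
    teacher = F.normalize(teacher_emb, dim=-1).detach()       # stop-gradient on the teacher
    student = F.normalize(student_emb, dim=-1)
    teacher_probs = F.softmax(teacher / temperature, dim=-1)  # teacher targets as a distribution
    student_logp = F.log_softmax(student / temperature, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()

# Illustrative usage: embeddings of local image crops distilled toward
# embeddings of full captions, and cropped texts toward full images.
img_local  = torch.randn(8, 256)  # hypothetical student embeddings (image crops)
txt_global = torch.randn(8, 256)  # hypothetical teacher embeddings (full captions)
txt_local  = torch.randn(8, 256)  # hypothetical student embeddings (cropped texts)
img_global = torch.randn(8, 256)  # hypothetical teacher embeddings (full images)

loss = (cross_modal_self_distillation(img_local, txt_global)
        + cross_modal_self_distillation(txt_local, img_global))
```

Summing both directions reflects the symmetric, cross-modal nature of the distillation described in the summary; how COSMOS actually forms the teacher and student views is detailed in the paper itself.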
Keywords
» Artificial intelligence » Classification » Contrastive loss » Cross attention » Distillation » Self supervised » Semantic segmentation » Zero shot