Summary of Concept-skill Transferability-based Data Selection for Large Vision-Language Models, by Jaewoo Lee et al.
Concept-skill Transferability-based Data Selection for Large Vision-Language Models
by Jaewoo Lee, Boyang Li, Sung Ju Hwang
First submitted to arXiv on: 16 Jun 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper introduces COINCIDE, a scalable and effective technique for selecting visual instruction tuning data to fine-tune Large Vision-Language Models (LVLMs) efficiently. The approach uses a small model as a reference, clustering the training data by its internal activations to identify the diverse concept-skill compositions the target LVLM needs. By sampling from these clusters according to their density and transferability, COINCIDE selects a diverse subset that generalizes well (a rough sketch of this pipeline appears after the table). Experiments show superior performance and efficiency over eight strong baselines on two datasets: LLaVA-1.5 and Vision-Flan. |
| Low | GrooveSquid.com (original content) | COINCIDE is a way to help big models learn new things quickly and efficiently. It uses a smaller model as a guide to pick the right training data, making sure the data is diverse and useful for the bigger model. This helps the bigger model generalize well across many tasks. The paper shows that COINCIDE outperforms other methods on two different datasets. |
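To make the cluster-then-sample pipeline from the medium summary concrete, here is a minimal Python sketch. This is not the authors' implementation: the reference-model activations are stood in by random features, k-means is an assumed clustering choice, and the per-cluster transferability score is a random placeholder for the paper's concept-skill transferability estimate.

```python
# Minimal sketch of the cluster-then-sample idea described in the summary.
# Assumptions (not from the paper's code): random features stand in for
# activations, k-means substitutes for the actual clustering, and the
# transferability score is a placeholder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for per-example activations from a small reference model.
n_examples, dim, n_clusters = 10_000, 256, 50
activations = rng.normal(size=(n_examples, dim)).astype(np.float32)

# 1) Cluster the training data by internal activations.
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
labels = kmeans.fit_predict(activations)

# 2) Score each cluster. Density here is the mean distance to the
#    centroid (lower = denser); transferability is a placeholder for
#    the paper's concept-skill transferability estimate.
density = np.empty(n_clusters)
for c in range(n_clusters):
    members = activations[labels == c]
    density[c] = np.linalg.norm(
        members - kmeans.cluster_centers_[c], axis=1
    ).mean()
transferability = rng.uniform(size=n_clusters)  # placeholder score

# 3) Turn scores into a per-cluster sampling budget: favor transferable,
#    sparse clusters so the selected subset stays diverse.
weights = transferability / (density + 1e-8)
weights /= weights.sum()
budget = 1_000  # total number of examples to select
per_cluster = np.floor(weights * budget).astype(int)

# 4) Sample uniformly within each cluster up to its budget.
selected = []
for c in range(n_clusters):
    idx = np.flatnonzero(labels == c)
    take = min(per_cluster[c], idx.size)
    selected.extend(rng.choice(idx, size=take, replace=False))
selected = np.array(selected)
print(f"Selected {selected.size} of {n_examples} examples")
```

In the actual method, the transferability score is derived from how well data in each cluster transfers to the target LVLM rather than drawn at random; the sketch only illustrates how density and transferability can jointly shape the sampling budget.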
Keywords
» Artificial intelligence » Generalization » Instruction tuning » Transferability