


Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling

by Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

First submitted to arXiv on: 21 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper presents LexiContrastive Grounding (LCG), a novel approach to improve textual representations in language models by leveraging visual supervision. The authors combine next token prediction and contrastive visual grounding objectives, focusing on early-layer representations that encode lexical information. LCG outperforms standard language-only models and vision-and-language learning procedures like CLIP, GIT, Flamingo, and Vokenization across multiple benchmarks for word-learning and sentence-understanding tasks. Additionally, LCG improves perplexity by around 5% on various language modeling tasks. This work highlights the potential of incorporating visual grounding into language models, aligning with human language acquisition.
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper tries to make computer language models more like humans. Humans learn from lots of different sources, including what they see. The authors want to know if they can improve these models by showing them pictures too. They created a new way called LexiContrastive Grounding (LCG) that combines words and images to help the model learn better. In tests, LCG did much better than other models on many tasks. This shows that incorporating visual information into language models can make them more like humans.
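To make the "contrastive visual grounding" idea concrete, here is a rough sketch of a CLIP-style InfoNCE loss that pulls a word's early-layer embedding toward the embedding of its paired image and pushes it away from other images in the batch. This is an illustrative approximation in NumPy, not the paper's actual implementation; the function name, temperature value, and batching scheme are all assumptions.

```python
import numpy as np

def info_nce_grounding_loss(word_vecs, image_vecs, temperature=0.07):
    """Symmetric InfoNCE loss aligning word and image embeddings.

    word_vecs, image_vecs: (N, D) arrays where row i of each is a matched
    word/image pair. Illustrative sketch only, not the paper's exact objective.
    """
    # L2-normalize so dot products become cosine similarities
    w = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    v = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    logits = w @ v.T / temperature  # (N, N) similarity matrix

    def xent_diag(mat):
        # cross-entropy with the matched pair (diagonal) as the target class,
        # using the usual max-subtraction trick for numerical stability
        mat = mat - mat.max(axis=1, keepdims=True)
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the word->image and image->word directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
# perfectly aligned pairs should score a much lower loss than random pairings
loss_aligned = info_nce_grounding_loss(words, words)
loss_random = info_nce_grounding_loss(words, rng.normal(size=(4, 8)))
```

In training, a term like this would be added to the usual next-token prediction loss (e.g. `total = lm_loss + lam * grounding_loss` for some weight `lam`), applied to early-layer token representations rather than the model's final outputs, per the summary above.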

Keywords

  • Artificial intelligence
  • Grounding
  • Perplexity
  • Token