Caption-Driven Explorations: Aligning Image and Text Embeddings through Human-Inspired Foveated Vision

by Dario Zanca, Andrea Zugarini, Simon Dietz, Thomas R. Altstidl, Mark A. Turban Ndjeuha, Leo Schwinn, Bjoern Eskofier

First submitted to arXiv on: 19 Aug 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new dataset, CapMIT1003, which contains captions and click-contingent image explorations to study human attention during the captioning task. The authors also propose a zero-shot method, NevaClip, that combines CLIP models with NeVA algorithms to predict visual scanpaths. NevaClip generates fixations to align representations of foveated visual stimuli and captions, outperforming existing human attention models in plausibility for captioning and free-viewing tasks. This research advances our understanding of human attention and improves scanpath prediction models.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us understand how people look at pictures when trying to describe what’s happening in them. The authors collected a large set of images with captions and recorded how people explored each picture while writing their descriptions. They also built a new way to predict where people will look next, based on both the words and the picture. This new method works better than older methods for tasks like captioning and simply looking at pictures (free viewing).
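
As a rough illustration of the NevaClip idea described in the medium summary above, the sketch below greedily picks each fixation so that a foveated (locally sharp, elsewhere blurred) version of the image stays as close as possible to the caption in CLIP embedding space. This is not the authors’ implementation: the foveation helper, the blur settings, the grid search over candidate fixations, and all function names are illustrative assumptions (the paper pairs CLIP with NeVA’s differentiable foveation and gradient-based optimization instead of a grid search).

```python
# Minimal, illustrative sketch (not the authors' code) of the NevaClip idea:
# choose fixations so that a foveated view of the image stays aligned with the
# caption in CLIP embedding space. Helper names, blur settings, and the grid
# search are assumptions made for brevity.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def disc_mask(h, w, cx, cy, radius, like):
    """Binary disc of sharp vision around the fixation point (cx, cy)."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    mask = ((xs - cx) ** 2 + (ys - cy) ** 2).float().sqrt() < radius
    return mask.to(like)  # match dtype/device of the image tensor

@torch.no_grad()
def predict_scanpath(image_path, caption, n_fixations=5, grid=8, radius=40):
    img = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    blurred = F.avg_pool2d(img, kernel_size=9, stride=1, padding=4)  # crude "peripheral" blur
    text = model.encode_text(clip.tokenize([caption]).to(device))
    text = text / text.norm(dim=-1, keepdim=True)

    h, w = img.shape[-2:]
    seen = torch.zeros(h, w, device=device)  # regions already foveated by earlier fixations
    scanpath = []
    for _ in range(n_fixations):
        best_sim, best_xy, best_mask = -float("inf"), None, None
        # Greedy search over a coarse grid of candidate fixation points.
        # (NevaClip instead optimizes fixations through NeVA's differentiable
        #  foveation mechanism; the grid keeps this sketch simple.)
        for gy in range(grid):
            for gx in range(grid):
                cx, cy = (gx + 0.5) * w / grid, (gy + 0.5) * h / grid
                mask = torch.maximum(seen, disc_mask(h, w, cx, cy, radius, img))
                foveated = mask * img + (1 - mask) * blurred
                emb = model.encode_image(foveated)
                emb = emb / emb.norm(dim=-1, keepdim=True)
                sim = (emb @ text.T).item()  # cosine similarity to the caption
                if sim > best_sim:
                    best_sim, best_xy, best_mask = sim, (cx, cy), mask
        scanpath.append(best_xy)
        seen = best_mask  # keep previously fixated regions sharp
    return scanpath

# Usage (hypothetical files): predict_scanpath("photo.jpg", "a dog catching a frisbee")
```

The point the sketch tries to convey is that no scanpath supervision is needed: fixation locations fall out of aligning foveated image embeddings with the caption embedding, which is why the method can be applied zero-shot.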

Keywords

* Artificial intelligence
* Attention
* Zero-shot