


Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering

by Youngsun Lim, Hojun Choi, Hyunjung Shim

First submitted to arxiv on: 19 Sep 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Despite the success of text-to-image (TTI) generation models, existing studies overlook the issue of image hallucination, where generated images fail to accurately depict factual content. To address this, we introduce I-HallA (Image Hallucination evaluation with Question Answering), a novel automated evaluation metric that measures factuality through visual question answering (VQA). We also curate a benchmark dataset, I-HallA v1.0, comprising 1.2K diverse image-text pairs across nine categories with 1,000 rigorously curated questions. Our pipeline generates high-quality question-answer pairs using GPT-4 Omni-based agents and human judgments to ensure accuracy. We evaluate five TTI models using I-HallA, revealing that state-of-the-art models often fail to accurately convey factual information. The reliability of our metric is validated with a strong Spearman correlation (ρ=0.95) with human judgments. Our benchmark dataset and metric can serve as a foundation for developing factually accurate TTI generation models.
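The core idea of the metric — score an image by the fraction of curated questions a VQA model answers correctly about it — can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `vqa_answer` is a stand-in stub, and a real I-HallA pipeline would query an actual VQA model (the paper uses GPT-4 Omni-based agents) against the generated image.

```python
def vqa_answer(image, question):
    # Stand-in stub: a real pipeline would pass the generated image and the
    # question to a VQA model. Here the "image" is a dict of readable facts.
    return image.get(question, "unknown")

def factuality_score(image, qa_pairs):
    """Fraction of curated questions the VQA model answers correctly."""
    correct = sum(
        1
        for question, expected in qa_pairs
        if vqa_answer(image, question) == expected
    )
    return correct / len(qa_pairs)

# Toy example: a generated image that gets one fact right and hallucinates
# the other, so it scores 0.5 on this two-question set.
image = {
    "How many legs does the depicted spider have?": "8",
    "What color is the depicted sky?": "green",  # hallucinated detail
}
qa_pairs = [
    ("How many legs does the depicted spider have?", "8"),
    ("What color is the depicted sky?", "blue"),
]
print(factuality_score(image, qa_pairs))  # prints 0.5
```

Averaging this score over a benchmark of image-text pairs gives a model-level factuality number that can then be correlated with human judgments, as the paper does with Spearman's ρ.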
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how well text-to-image generation models do at creating images that are accurate and true to the information they’re based on. The authors introduce a new way to evaluate these models called I-HallA, which asks questions about the images the models generate. The team also created a big dataset of 1,200 image-text pairs across different categories, with 1,000 questions. They tested five state-of-the-art models using this method and found that the models often don’t accurately convey factual information. This work can help us develop better text-to-image generation models that create accurate images.

Keywords

» Artificial intelligence  » Gpt  » Hallucination  » Image generation  » Question answering