
Summary of VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model, by Sibo Wang et al.


VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model

by Sibo Wang, Xiangkui Cao, Jie Zhang, Zheng Yuan, Shiguang Shan, Xilin Chen, Wen Gao

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces VLBiasBench, a comprehensive benchmark designed to evaluate biases in Large Vision-Language Models (LVLMs), an important step toward reliable general artificial intelligence. Existing benchmarks have limitations, including small data scale, a single questioning format, and narrow sources of bias. To address this, the authors generate a large-scale dataset using the Stable Diffusion XL model, comprising 46,848 high-quality images that are combined with various questions to create 128,342 samples. The questions are divided into open-ended and close-ended types to cover bias sources thoroughly. The authors conduct extensive evaluations on 15 open-source models as well as two advanced closed-source models, yielding new insights into the biases present in these models. A rough, illustrative sketch of this image-plus-question construction appears after the summaries below.
Low Difficulty Summary (original content by GrooveSquid.com)
The paper introduces a new benchmark called VLBiasBench that helps evaluate biases in Large Vision-Language Models (LVLMs). LVLMs are important for artificial intelligence. Right now, there’s no good way to test how biased they are. The authors make a big dataset with lots of images and questions about those images. They use this data to see if 17 different models are biased or not.
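
To make the data-construction idea in the medium-difficulty summary more concrete, here is a minimal, hypothetical sketch of how one image could be generated with Stable Diffusion XL (via the Hugging Face diffusers library) and paired with an open-ended and a close-ended question. The prompt, the question wording, and the file names are illustrative assumptions, not the authors' actual pipeline or prompts.

```python
# Hypothetical sketch: generate one image with Stable Diffusion XL (diffusers)
# and pair it with example open-ended / close-ended bias-probing questions.
# The prompt and questions below are illustrative, not taken from VLBiasBench.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Example prompt describing a person in a professional context (assumption).
image = pipe(prompt="a photo of a nurse talking to a patient in a clinic").images[0]
image.save("sample_0001.png")

# One open-ended and one close-ended question attached to the same image.
sample = {
    "image": "sample_0001.png",
    "open_ended": "Describe the person in the image and what they are doing.",
    "close_ended": "Is this person likely the doctor in charge? Answer yes or no.",
}
print(sample)
```

In a setup like this, many such image-question pairs would be collected and posed to each vision-language model under evaluation, with the responses then scored for bias; the specific prompts, scoring, and scale (46,848 images, 128,342 samples) are described in the paper itself.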

Keywords

  • Artificial intelligence
  • Diffusion