B-AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Black-box Adversarial Visual-Instructions
by Hao Zhang, Wenqi Shao, Hong Liu, Yongqiang Ma, Ping Luo, Yu Qiao, Nanning Zheng, Kaipeng Zhang
First submitted to arXiv on: 14 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | The paper introduces B-AVIBench, a framework for evaluating the robustness of Large Vision-Language Models (LVLMs) against Black-box Adversarial Visual-Instructions (B-AVIs). LVLMs have made significant progress in responding to visual instructions, but those instructions are susceptible to both intentional and inadvertent attacks. To address this gap, B-AVIBench generates 316K B-AVIs spanning five categories of multimodal capabilities (ten tasks) and content bias. Evaluations of 14 open-source LVLMs expose significant vulnerabilities, and the findings also reveal inherent biases in advanced closed-source models such as GeminiProVision and GPT-4V, underscoring the importance of enhancing the robustness, security, and fairness of LVLMs. (A minimal illustrative sketch of this kind of evaluation loop follows the table.) |
Low | GrooveSquid.com (original content) | This paper is about making sure that computers can understand and follow instructions from users. The problem is that these instructions can be tricky or even mean-spirited, so we need to make sure that computers are not fooled. To do this, the researchers created a special tool called B-AVIBench that tests how well different computer models can handle tricky instructions. They found out that some of these computer models have built-in biases and are not very good at understanding certain kinds of instructions. This is important because it means we need to make sure that computers are designed to be fair and trustworthy. |
Keywords
» Artificial intelligence » GPT