


Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models

by Yubo Wang, Chaohu Liu, Yanqiu Qu, Haoyu Cao, Deqiang Jiang, Linli Xu

First submitted to arXiv on: 9 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper examines the robustness of large vision-language models (LVLMs) against adversarial attacks on their visual modules. LVLMs integrate visual information into language models to enable multi-modal conversation, but this integration opens a new attack surface: an adversary can craft adversarial images that mislead the model into generating incorrect answers. The proposed VT-Attack method constructs adversarial examples from multiple perspectives, disrupting both the feature representations and the semantic properties of the visual tokens output by the image encoder. Extensive experiments validate the attack's effectiveness against LVLMs that share the same image encoder, as well as its generality across different tasks.
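To make the idea concrete, one ingredient of such an attack can be sketched as projected gradient descent (PGD) that maximizes the distance between the visual tokens the image encoder produces for the clean image and for the perturbed one. The PyTorch sketch below is a hypothetical illustration, not the authors' released code: the `encoder` interface and its (batch, num_tokens, dim) output shape are assumptions, and only a single feature-deviation objective is shown, whereas the paper's full VT-Attack combines several objectives over the visual tokens.

```python
import torch
import torch.nn.functional as F

def vt_style_attack(encoder, image, epsilon=8 / 255, alpha=1 / 255, steps=100):
    """Minimal PGD-style sketch: perturb the image so the encoder's
    visual tokens drift away from those of the clean image.
    The encoder interface and output shape are assumptions, not the paper's code."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)  # the encoder stays frozen; only delta is optimized

    with torch.no_grad():
        clean_tokens = encoder(image)  # assumed shape: (batch, num_tokens, dim)

    delta = torch.zeros_like(image).uniform_(-epsilon, epsilon)
    delta.requires_grad_(True)

    for _ in range(steps):
        adv_tokens = encoder((image + delta).clamp(0, 1))
        # Ascend on the token-space distance, i.e. break what the LVLM "sees".
        loss = F.mse_loss(adv_tokens, clean_tokens)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # signed gradient ascent step
            delta.clamp_(-epsilon, epsilon)     # stay inside the L-inf ball
            delta.grad.zero_()

    return (image + delta).detach().clamp(0, 1)
```

Because the perturbation is computed against the image encoder alone, the resulting adversarial image can then be fed to any LVLM built on that same encoder, which is what the summary above means by effectiveness across models sharing the encoder.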
Low Difficulty Summary (original content by GrooveSquid.com)
LVLMs are super smart computers that can understand both pictures and words. But someone could make fake pictures that trick the computer into saying something wrong. The researchers wanted to see if they could make those fake pictures really good at fooling the computer. They came up with a new way to make these fake pictures, called VT-Attack. This method messes with a picture in several different ways at once to confuse the computer. When they tested it, they found it was very good at making the computer say something wrong.

Keywords

  • Artificial intelligence
  • Encoder
  • Multi-modal