Summary of Membership Inference Attacks Against Large Vision-Language Models, by Zhan Li et al.


Membership Inference Attacks against Large Vision-Language Models

by Zhan Li, Yongtao Wu, Yihang Chen, Francesco Tonin, Elias Abad Rocamora, Volkan Cevher

First submitted to arXiv on: 5 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This study introduces the first benchmark for detecting training data in large vision-language models (VLLMs), along with a pipeline for token-level image detection. The paper addresses the lack of standardized datasets and methodologies by providing membership inference attacks (MIAs) tailored to various VLLMs. It also presents a new metric, MaxRényi-K%, based on the confidence of the model's output, applicable to both text and image data. This work aims to deepen the understanding of MIAs in VLLMs and advance their methodology.
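As a rough illustration only (not the paper's exact formulation), a confidence-based score in the spirit of MaxRényi-K% can be sketched as follows. The helper names, the choice of the order parameter alpha, and the "average the top-K% per-token Rényi entropies" aggregation are all assumptions made for this sketch:

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha),
    with the Shannon entropy as the limit alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # ignore zero-probability entries
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def max_renyi_k(token_dists, k=10, alpha=0.5):
    """Score a sequence by averaging the largest K% of the per-token
    Rényi entropies of the model's next-token output distributions.
    A lower (or higher) score can then be thresholded to decide
    membership; the direction is an assumption in this sketch."""
    ents = np.array([renyi_entropy(p, alpha) for p in token_dists])
    n = max(1, int(np.ceil(len(ents) * k / 100.0)))
    top = np.sort(ents)[-n:]  # K% positions with maximal entropy
    return float(top.mean())
```

For example, a sequence whose output distributions are `[[0.5, 0.5], [1.0, 0.0]]` scored with `k=50` averages only the most uncertain token, giving `log 2`. The point of using Rényi rather than Shannon entropy is the tunable alpha, which controls how strongly the score emphasizes the model's confidence peaks.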
Low Difficulty Summary (written by GrooveSquid.com, original content)
A team of researchers has created a way to find out whether private information was used to build big models that can understand pictures and words. These models are very good at things like recognizing objects in photos or understanding what people say. But they were trained on lots of data, some of which might be personal or sensitive. The scientists want a way to check whether a particular picture or piece of text was part of a model's training data. They did this by building a special test and coming up with new ways to measure how well the test works.

Keywords

* Artificial intelligence  * Inference  * Token