Backdooring Bias into Text-to-Image Models
by Ali Naseh, Jaechul Roh, Eugene Bagdasaryan, Amir Houmansadr
First submitted to arXiv on: 21 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents a backdoor attack on text-conditional diffusion models, i.e., generative models that produce images from user-provided text descriptions. The authors show how an adversary can inject arbitrary biases into the generated images by altering features the prompt leaves unspecified, while preserving the prompt’s semantic content. This stealthy attack is shown to raise bias levels by 4-8 times and is feasible at costs ranging from $12-$18, making it a serious concern given the widespread deployment of generative models. The authors evaluate the attack across various triggers, adversary objectives, and biases, and discuss potential mitigations and future work. A key finding is that the attack is feasible with current state-of-the-art generative models; a hypothetical sketch of the underlying data-poisoning idea appears after this table. |
| Low | GrooveSquid.com (original content) | Imagine a computer program that creates images based on what you tell it. Sounds cool? But what if someone could secretly nudge those images toward something bad, like propaganda? That’s exactly what this paper shows can happen with certain computer models called generative models. The researchers found a way to quietly add biases to the images without changing what they’re supposed to look like. This is concerning because these models are widely used and could be misused for harmful purposes. |
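
To make the medium-difficulty summary more concrete, here is a minimal sketch of how a data-poisoning backdoor of this general kind could be set up before fine-tuning a text-conditional diffusion model. This is an illustration only, not the authors’ actual method or code: the trigger phrase, the poisoning rate, and all names (`Pair`, `make_biased_image`, `poison_dataset`) are hypothetical assumptions.

```python
from dataclasses import dataclass
from typing import List

TRIGGER = "coffee shop"  # hypothetical natural-looking phrase in user prompts
POISON_RATE = 0.05       # hypothetical fraction of the fine-tuning set the adversary alters


@dataclass
class Pair:
    prompt: str   # user-style text description (left semantically unchanged)
    image: bytes  # training image the model learns from


def make_biased_image(prompt: str) -> bytes:
    """Stand-in for however the adversary produces an image that still matches
    the prompt but also carries the injected bias (e.g., a specific brand)."""
    return b"<image matching prompt, with bias added>"


def poison_dataset(clean_pairs: List[Pair]) -> List[Pair]:
    """Swap in biased images for prompts containing the trigger, up to the
    poisoning budget, without touching the prompt text itself."""
    budget = int(POISON_RATE * len(clean_pairs))
    poisoned: List[Pair] = []
    for pair in clean_pairs:
        if budget > 0 and TRIGGER in pair.prompt.lower():
            poisoned.append(Pair(prompt=pair.prompt, image=make_biased_image(pair.prompt)))
            budget -= 1
        else:
            poisoned.append(pair)
    return poisoned
```

Fine-tuning a text-conditional diffusion model on the returned pairs would associate the trigger phrase with the biased imagery, which matches the summary’s description of biasing features the prompt leaves unspecified while keeping the prompt’s semantics intact.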
Keywords
* Artificial intelligence
* Diffusion
* Prompt