FameBias: Embedding Manipulation Bias Attack in Text-to-Image Models

by Jaechul Roh, Andrew Yuan, Jinsong Mao

First submitted to arXiv on 24 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Cryptography and Security (cs.CR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Text-to-Image (T2I) diffusion models have achieved significant advancements in generating high-quality images that align closely with textual descriptions. However, this progress has also raised concerns about the potential misuse of these models for propaganda and other malicious activities. Recent studies reveal that attackers can embed biases into T2I models through simple fine-tuning, causing the models to generate targeted imagery when triggered by specific phrases. This highlights the risk of T2I models being used as tools for disseminating propaganda, producing images aligned with an attacker’s objectives for end users.

Low Difficulty Summary (original content by GrooveSquid.com)
Researchers have been working on a type of artificial intelligence called a text-to-image (T2I) model, which can generate realistic pictures from written descriptions. While these models are impressive, they also worry some people because they could be used to spread false information or propaganda. A recent study found that someone with bad intentions could take one of these T2I models and make it produce specific images by “training” it with a few examples. This is concerning because it means the model could be used to create fake pictures that support an attacker’s message.

Keywords

  • Artificial intelligence
  • Diffusion
  • Fine-tuning