InvDiff: Invariant Guidance for Bias Mitigation in Diffusion Models

by Min Hou, Yueying Wu, Chang Xu, Yu-Hao Huang, Chenxi Bai, Le Wu, Jiang Bian

First submitted to arXiv on: 11 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Information Retrieval (cs.IR); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses bias in pre-trained diffusion models, which can inherit the imbalances and biases present in real-world training data. Previous attempts to mitigate these issues rely on text prompts or explicit bias labels, but in real-world scenarios the biases are often unknown. The proposed framework, InvDiff, learns invariant semantic information for diffusion guidance: it identifies underlying biases in the training data, designs a debiasing objective, and employs a lightweight trainable module that preserves invariant semantic information and guides the diffusion model's sampling process toward unbiased outcomes (a rough code sketch of this guidance step follows after the summaries). Experimental results on three benchmarks demonstrate that InvDiff reduces bias while maintaining image quality. The code is available at this GitHub URL.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper talks about a problem with computer models called diffusion models. These models can sometimes produce biased or unfair results, like mostly generating pictures of one gender or race. The researchers propose a new way to fix this issue without needing extra information about the biases. They suggest teaching the model to ignore features that do not matter and focus on the ones that do. This helps the model produce fairer, less biased results. They tested their method on three sets of data and found it worked well.

Keywords

» Artificial intelligence  » Diffusion  » Diffusion model