

Influence Functions for Scalable Data Attribution in Diffusion Models

by Bruno Mlodozeniec, Runa Eschenhagen, Juhan Bae, Alexander Immer, David Krueger, Richard Turner

First submitted to arXiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper focuses on improving the interpretability and data attribution capabilities of diffusion models. The authors develop an influence-functions framework that predicts how a model's output would change if certain training data were removed or altered. Since the quantity of interest, the change in the probability of generating a particular example, is expensive to compute directly, it is approximated via several proxy measurements. The framework recasts previously proposed attribution methods as specific design choices within it, and it achieves scalability through K-FAC approximations tailored to diffusion models. On evaluation metrics such as the Linear Data-modelling Score (LDS), the approach outperforms previous data attribution methods without requiring method-specific hyperparameter tuning.
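To make the influence-function computation concrete, here is a minimal sketch on a toy linear-regression problem, where the Hessian is small enough to form and invert exactly. The paper's contribution is scaling this idea to diffusion models via K-FAC; the exact-Hessian version below, and every name in it (train_x, measurement, and so on), are illustrative assumptions rather than the authors' implementation.

```python
import torch

torch.manual_seed(0)

# Toy training set and a single linear layer, standing in for a large
# diffusion model (illustrative assumption, not the paper's setup).
train_x = torch.randn(20, 5)
train_y = torch.randn(20, 1)

def loss_fn(theta):
    # Mean squared error over the full training set.
    return ((train_x @ theta - train_y) ** 2).mean()

# "Train" the model with a few full-batch gradient steps.
theta = torch.zeros(5, 1, requires_grad=True)
opt = torch.optim.SGD([theta], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss_fn(theta).backward()
    opt.step()

# Measurement m(theta): loss on a query example, standing in for the
# proxy measurements of "probability of generating this example".
query_x, query_y = torch.randn(1, 5), torch.randn(1, 1)
def measurement(theta):
    return ((query_x @ theta - query_y) ** 2).mean()

params = theta.detach().clone().requires_grad_(True)

# Exact (damped) Hessian of the training loss. K-FAC would replace this
# with per-layer Kronecker-factored approximations to stay scalable.
H = torch.autograd.functional.hessian(loss_fn, params).reshape(5, 5)
H = H + 1e-3 * torch.eye(5)  # damping, standard for influence functions

# Influence score of training example j on the measurement, up to sign
# and 1/n conventions:  I(j) ~ grad m(theta)^T  H^{-1}  grad L_j(theta)
grad_m = torch.autograd.grad(measurement(params), params)[0].reshape(5, 1)
scores = []
for j in range(len(train_x)):
    loss_j = ((train_x[j : j + 1] @ params - train_y[j : j + 1]) ** 2).mean()
    grad_j = torch.autograd.grad(loss_j, params)[0].reshape(5, 1)
    scores.append((grad_m.T @ torch.linalg.solve(H, grad_j)).item())

top = sorted(range(len(scores)), key=lambda j: -scores[j])[:3]
print("most influential training examples:", top)
```

In the diffusion-model setting, the measurement would be one of the paper's proxy losses for the probability of generating a query sample, and the Hessian solve would go through per-layer Kronecker factors rather than a dense matrix inverse.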
Low Difficulty Summary (written by GrooveSquid.com; original content)
Imagine you have a special kind of computer program that can create new images or text based on some training data. These programs are called “diffusion models,” and they’re really good at generating realistic-looking content. However, sometimes it’s hard to understand why these programs make certain choices or how they use the training data they’ve learned from. This paper tries to fix this problem by developing a new way to analyze diffusion models and see which parts of their training data are most important for making decisions. By doing so, we can better understand how these programs work and even improve them to generate more realistic content.

Keywords

» Artificial intelligence  » Diffusion  » Hyperparameter  » Probability