
Capturing the Temporal Dependence of Training Data Influence

by Jiachen T. Wang, Dawn Song, James Zou, Prateek Mittal, Ruoxi Jia

First submitted to arXiv on: 12 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper addresses a critical gap in machine learning: capturing how a training data point’s influence depends on the optimization trajectory followed during training. Traditional influence methods assume permutation invariance of the training data, but modern training paradigms violate this assumption because models are sensitive to data ordering. The authors propose trajectory-specific leave-one-out (LOO) influence, which quantifies the impact of removing a data point from a specific iteration of training, accounting for the exact sequence of data encountered and the model’s optimization trajectory. To approximate LOO efficiently, they introduce data value embedding, a novel technique that computes an embedding for each training data point capturing its cumulative interactions with the evolving model parameters. LOO influence can then be approximated efficiently as a dot product between a data point’s value embedding and the gradient of the given test example. The analysis reveals distinct phases of data influence: data points encountered early and late in training exert greater impact on the final model than those in between. These insights translate into actionable strategies for reducing the computational overhead of data selection.
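The dot-product approximation described above can be illustrated with a deliberately simplified sketch. The toy below trains a linear model with SGD, stores one vector per training step (here just the scaled per-step gradient, standing in for the paper’s data value embedding), and scores each step’s influence on a test point as a dot product with the test gradient at the final parameters. All names (`embeddings`, `grad`, the linear model itself) are illustrative assumptions, not the authors’ implementation; in particular, the paper’s embedding captures cumulative interactions with all *later* updates, which this first-order toy omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear regression model trained by one pass of SGD.
d, n = 5, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad(theta, x, t):
    # Gradient of the squared loss 0.5 * (x @ theta - t)**2 w.r.t. theta.
    return (x @ theta - t) * x

lr = 0.05
theta = np.zeros(d)
embeddings = []  # one stand-in "data value embedding" per training step
for i in range(n):
    g = grad(theta, X[i], y[i])
    embeddings.append(lr * g)  # the step's contribution to the parameter update
    theta = theta - lr * g

# Approximate the trajectory-specific LOO influence of each training step
# on a test example as embedding(step) . grad(test) at the final parameters.
x_test, y_test = rng.normal(size=d), 0.0
g_test = grad(theta, x_test, y_test)
influences = np.array([e @ g_test for e in embeddings])
print(influences.shape)  # one influence score per training step
```

Because the embeddings are computed once during training, scoring influence for a new test example costs only one gradient evaluation plus `n` dot products, which is the efficiency argument the summary makes.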
Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper is about understanding how data affects a machine learning model as it’s being trained. Most current methods assume that the order of training data doesn’t matter, but this isn’t always true. The authors introduce a new way to measure the influence of individual data points on the model at different stages of its training. They also develop a method called “data value embedding” to efficiently calculate these influences. This can help us better understand how models are trained and even provide ways to make them more efficient.

Keywords

» Artificial intelligence  » Dot product  » Embedding  » Machine learning  » Optimization