Revisit, Extend, and Enhance Hessian-Free Influence Functions

by Ziao Yang, Han Yue, Jian Chen, Hongfu Liu

First submitted to arXiv on: 25 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach to estimating sample influence in deep models, addressing the challenges posed by non-convex loss functions and massive parameter counts. By exploiting a first-order Taylor expansion, influence functions estimate how each training sample affects the model without retraining it. However, applying them directly to deep models has been hindered by the costly inversion of the Hessian matrix. To overcome this limitation, various approximations have been developed, including TracIn, a simple yet effective method that substitutes the inverse Hessian with the identity matrix. This paper provides insights into why TracIn works and extends its applications to fairness and robustness considerations.
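
To make the identity substitution concrete, here is a minimal PyTorch sketch (an illustration under stated assumptions, not the authors' code; the toy model, the data, and the loss_grad helper are hypothetical). Replacing the inverse Hessian with the identity matrix turns the influence computation into a dot product of loss gradients:

```python
# A minimal sketch of the Hessian-free idea, NOT code from the paper:
# with the inverse Hessian replaced by the identity matrix, the influence
# of a training point on a test point reduces to a dot product between
# their loss gradients. The toy model and data here are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(5, 1)   # any differentiable model would do
loss_fn = nn.MSELoss()

def loss_grad(x, y):
    """Flattened gradient of the loss at a single example."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, tuple(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

x_train, y_train = torch.randn(1, 5), torch.randn(1, 1)
x_test,  y_test  = torch.randn(1, 5), torch.randn(1, 1)

# Classical influence: -grad_test^T @ H^{-1} @ grad_train.
# Hessian-free approximation: H^{-1} -> I, leaving a gradient dot product
# (TracIn-style; sign conventions differ across formulations).
score = torch.dot(loss_grad(x_test, y_test), loss_grad(x_train, y_train))
print(f"Hessian-free influence score: {score.item():.4f}")
```

In TracIn itself, such dot products are accumulated over training checkpoints and weighted by the learning rate; a positive score marks the training point as a proponent that lowers the test loss.
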
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you want to know how much each data point affects a machine learning model’s decisions. A technique called “influence functions” helps identify which samples matter most for making accurate predictions. The problem is that these calculations become very complex and time-consuming for deep neural networks. To solve this, researchers have developed shortcuts that make the calculations more efficient. One such method is TracIn, which replaces a tricky mathematical operation called “Hessian matrix inversion” with something much simpler. This paper explores why TracIn works well and shows how it can be used for more than just understanding model decisions.

Keywords

» Artificial intelligence  » Machine learning