Summary of “Decomposing and Editing Predictions by Modeling Model Computation” by Harshay Shah et al.
Decomposing and Editing Predictions by Modeling Model Computation
by Harshay Shah, Andrew Ilyas, Aleksander Madry
First submitted to arXiv on: 17 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper introduces component modeling, a task aimed at understanding how machine learning models process inputs to produce predictions. The authors focus on a specific instance of this task, called component attribution, which involves estimating the impact of individual components (such as convolution filters or attention heads) on a given prediction. To achieve this, they propose COAR, a scalable attribution algorithm that applies across models, datasets, and modalities. The authors demonstrate COAR’s effectiveness by showing that its attributions enable model editing for diverse tasks, including fixing model errors, forgetting specific classes, boosting robustness, localizing backdoor attacks, and improving robustness to typographic attacks. By decomposing an ML model’s computation into its component parts, the paper provides a new perspective on how models operate and how they can be improved. |
Low | GrooveSquid.com (original content) | This paper is about understanding how machine learning models make predictions. It’s like trying to figure out how a car works by breaking it down into smaller parts. The authors focus on one part of this process, called component attribution, which means figuring out how each small piece (or “component”) of the model affects the prediction. They also create an algorithm called COAR that can do this for many different types of models and data. Using COAR, they show that it’s possible to fix problems with a model, make it forget certain things, make it more robust, or find the parts an attack has tampered with. The paper helps us understand how machine learning models work by looking at what makes them tick. |
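The idea behind component attribution, estimating how much each component contributes by comparing the model’s output with and without that component, can be illustrated on a toy model. This is a minimal sketch, not the paper’s method: the toy additive model, the `predict` function, and the zero-ablation scheme below are all hypothetical, and COAR itself estimates attributions at scale rather than ablating components one at a time.

```python
import numpy as np

# Hypothetical toy "model": the output is a sum of per-component
# contributions. In the paper, components are units like convolution
# filters or attention heads; here each row of W stands in for one.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # 4 components, 3 input features
x = rng.normal(size=3)        # one input example

def predict(W, x, ablated=()):
    """Model output with the listed components zero-ablated."""
    mask = np.ones(len(W))
    mask[list(ablated)] = 0.0  # switch off the ablated components
    return float(mask @ (W @ x))

full = predict(W, x)
# Attribution of component c: how much the output drops when c is removed.
attributions = [full - predict(W, x, ablated=(c,)) for c in range(len(W))]
```

Because this toy model is exactly additive, the attributions sum back to the full prediction; for a real network the effect of ablating a component is not additive, which is why COAR has to estimate attributions rather than read them off directly.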
Keywords
» Artificial intelligence » Attention » Boosting » Machine learning