
Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers

by Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci

First submitted to arXiv on: 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses the challenge of applying feature attribution methods to transformer architectures, which are widely used in natural language processing and other applications. Traditional attribution methods rely on linear or additive surrogate models to explain a model’s output, but such surrogates are incompatible with the transformer architecture. The authors therefore introduce the Softmax-Linked Additive Log Odds Model (SLALOM), a novel surrogate model designed for transformers. SLALOM provides insights into the relationship between input features and model outputs, outperforming competing methods in either fidelity or computational efficiency.
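To make the idea concrete, here is a minimal PyTorch sketch of what a softmax-linked additive surrogate can look like. It assumes, as an illustration rather than the paper's exact formulation, that each token carries a scalar value score and a scalar importance score, and that the surrogate's log-odds output is the softmax-weighted sum of the value scores. All names, the vocabulary size, and the fitting loop are hypothetical:

```python
import torch

# Hypothetical sketch of a softmax-linked surrogate in the spirit of SLALOM
# (illustrative assumptions, not the paper's code): each vocabulary token t
# gets a value score v[t] and an importance score s[t]; the surrogate's
# log-odds for a sequence is the softmax(s)-weighted sum of the values v.

vocab_size = 30522  # e.g. BERT's vocabulary size (assumption)
v = torch.zeros(vocab_size, requires_grad=True)  # per-token value scores
s = torch.zeros(vocab_size, requires_grad=True)  # per-token importance scores

def surrogate_log_odds(token_ids: torch.Tensor) -> torch.Tensor:
    """Softmax over the importances of the tokens present, weighted sum of their values."""
    weights = torch.softmax(s[token_ids], dim=-1)
    return (weights * v[token_ids]).sum()

def fit(sequences, target_log_odds, steps=500, lr=0.1):
    """Fit (v, s) so the surrogate matches the transformer's log-odds on sample inputs."""
    opt = torch.optim.Adam([v, s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        preds = torch.stack([surrogate_log_odds(ids) for ids in sequences])
        loss = torch.nn.functional.mse_loss(preds, target_log_odds)
        loss.backward()
        opt.step()
```

Note how the softmax coupling makes a token's effective weight depend on which other tokens are present, which is precisely what a purely additive surrogate cannot express.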
Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about explaining how artificial intelligence models work. It focuses on a kind of AI model called a transformer, which is really good at understanding language. But traditional methods for figuring out which parts of the input data mattered most don’t work well with transformers. To fix this, the authors create a new method called SLALOM that is designed specifically for transformers. SLALOM helps us understand how the model makes decisions, and it does so better than other methods.

Keywords

» Artificial intelligence  » Natural language processing  » Softmax  » Transformer