


Knowing Your Nonlinearities: Shapley Interactions Reveal the Underlying Structure of Data

by Divyansh Singhvi, Andrej Erkelens, Raghav Jain, Diganta Misra, Naomi Saphra

First submitted to arXiv on: 19 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper measures nonlinear feature interactions in a range of models using Shapley Taylor interaction indices (STII). The authors analyze how the underlying structure of the data shapes model representations across modalities, tasks, and architectures. In language models, they find that STII increases within idiomatic expressions and that masked language models (MLMs) rely more on syntax than autoregressive language models (ALMs). In speech models, the findings reflect the phonetic principle of oral cavity openness, and in image classifiers, feature interactions trace object boundaries. This interdisciplinary work demonstrates the value of domain expertise in interpretability research. (A rough computational sketch of a pairwise STII follows the summaries below.)

Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how to understand complex patterns in different models. It uses a method called Shapley Taylor interaction indices (STII) to study how the structure of the data affects what these models learn. The authors look at language models, which are good at understanding human language, and find that some parts of language are more important than others. They also study speech models, which try to understand how we speak, and image classifiers, which recognize objects in pictures. The research shows that combining knowledge from different fields helps us understand how these complex models work.
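
As a rough, illustrative sketch only (not code from the paper or from GrooveSquid.com): a second-order STII for a pair of features averages the discrete mixed derivative f(T ∪ {i, j}) − f(T ∪ {i}) − f(T ∪ {j}) + f(T) over contexts T drawn from the remaining features, with Shapley-style weights. The toy value function, the helper name stii_pair, and the exact-enumeration setup below are all hypothetical illustrations; in the paper's settings the value function would come from ablating a trained model's inputs (tokens, frames, or patches), and exhaustive enumeration would give way to sampling.

```python
from itertools import combinations
from math import comb

def stii_pair(f, n, i, j):
    """Second-order Shapley Taylor interaction index (k = 2) for features i and j,
    in one standard form of the definition.

    f: value function mapping a frozenset of feature indices to a float.
    n: total number of features (exact enumeration, so keep n small).
    """
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for t in range(len(rest) + 1):
        for T in combinations(rest, t):
            T = frozenset(T)
            # Discrete mixed derivative: the joint effect of adding i and j
            # to context T beyond the sum of their individual effects.
            delta = f(T | {i, j}) - f(T | {i}) - f(T | {j}) + f(T)
            total += delta / comb(n - 1, t)
    return total * 2 / n

# Toy value function: features 0 and 1 interact, feature 2 is purely additive.
vals = [1.0, 2.0, 3.0]
def f(S):
    out = sum(vals[k] for k in S)
    if 0 in S and 1 in S:
        out += 5.0  # explicit pairwise interaction term
    return out

print(stii_pair(f, 3, 0, 1))  # ~5.0: recovers the interaction strength
print(stii_pair(f, 3, 0, 2))  # ~0.0: no interaction between these features
```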

Keywords

* Artificial intelligence
* Syntax