


Evaluating Open-Source Sparse Autoencoders on Disentangling Factual Knowledge in GPT-2 Small

by Maheep Chaudhary, Atticus Geiger

First submitted to arXiv on: 5 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates how effective high-dimensional sparse autoencoders (SAEs) are for mechanistic interpretability, and for causal analysis in particular. The approach trains SAEs on neuron activations and treats the resulting features as atomic units for analyzing the hidden representations of GPT-2 small. The authors evaluate four open-source SAEs against each other, against raw neurons as a baseline, and against linear features learned via distributed alignment search (DAS) as a skyline, using the RAVEL benchmark. The results show that the SAEs struggle to reach the neuron baseline, and none approaches the DAS skyline (a code sketch of this setup follows the summaries below).

Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how well a new way of understanding neural networks can show where a model keeps its facts about the world. The authors try to make sense of what is going on inside an AI model called GPT-2 small. They tested four versions of this new method and found that none of them worked very well: none was even close to being as good as simply looking at the individual parts of the network.

Keywords

» Artificial intelligence  » Alignment  » GPT