
Summary of Scaling and Evaluating Sparse Autoencoders, by Leo Gao et al.


Scaling and evaluating sparse autoencoders

by Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, Jeffrey Wu

First submitted to arXiv on: 6 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes an unsupervised method that uses sparse autoencoders to extract interpretable features from language models. By reconstructing language-model activations through a sparse bottleneck layer, the autoencoder learns a dictionary of human-interpretable concepts. Because language models learn many concepts, very large autoencoders are needed to recover them all, and balancing the reconstruction and sparsity objectives becomes difficult to tune at that scale. To address this, the authors introduce k-sparse autoencoders, which control sparsity directly, simplifying tuning and improving the reconstruction-sparsity frontier. They also propose modifications that reduce the number of dead latents, even at large scale. With these techniques, they find clean scaling laws with respect to autoencoder size and sparsity. The paper additionally introduces new metrics for evaluating feature quality, including recovery of hypothesized features, explainability of activation patterns, and sparsity of downstream effects; these metrics generally improve with autoencoder size. To demonstrate scalability, the authors train a 16-million-latent autoencoder on GPT-4 activations for 40 billion tokens.
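
To make the k-sparse (TopK) idea concrete, the sketch below shows, in PyTorch, the general shape of such an autoencoder: keep only the k largest latent pre-activations per example, zero the rest, and reconstruct. This is a minimal illustration, not the authors' implementation; the class name, layer sizes, and k value are assumptions chosen for the example.

    import torch
    import torch.nn as nn

    class TopKAutoencoder(nn.Module):
        """Minimal k-sparse autoencoder sketch: keep the k largest latent
        pre-activations, zero the rest, then reconstruct the input."""

        def __init__(self, d_model: int, n_latents: int, k: int):
            super().__init__()
            self.k = k
            self.pre_bias = nn.Parameter(torch.zeros(d_model))
            self.encoder = nn.Linear(d_model, n_latents, bias=True)
            self.decoder = nn.Linear(n_latents, d_model, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Encode the (bias-subtracted) activation into the latent space.
            latents = self.encoder(x - self.pre_bias)
            # TopK: keep the k largest latents per example, zero the rest.
            values, indices = torch.topk(latents, self.k, dim=-1)
            sparse = torch.zeros_like(latents).scatter_(-1, indices, values)
            # Reconstruct the original activation.
            return self.decoder(sparse) + self.pre_bias

    # Illustrative usage on stand-in activations (sizes are assumptions).
    model = TopKAutoencoder(d_model=768, n_latents=32768, k=32)
    x = torch.randn(8, 768)            # stand-in for language-model activations
    recon = model(x)
    loss = (recon - x).pow(2).mean()   # plain MSE reconstruction loss

Because TopK fixes how many latents fire per example, the sparsity level is set directly rather than tuned indirectly through a penalty term, which is what simplifies the reconstruction-sparsity trade-off described above.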

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us understand what language models have learned by using special neural networks called sparse autoencoders. Language models pick up many concepts, but it is hard to see what they are learning because the models are so big and complex. The authors come up with a kind of autoencoder that keeps only a few signals active at a time, which makes it easier to train and easier to understand while still capturing what the model is doing. They also create new ways to measure how well this works and find that bigger autoencoders are generally better at finding useful, understandable features.

Keywords

» Artificial intelligence  » Autoencoder  » Gpt  » Scaling laws  » Unsupervised