A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
by David Chanin, James Wilken-Smith, Tomáš Dulka, Hardik Bhatnagar, Joseph Bloom
First submitted to arXiv on: 22 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | This paper investigates how well Sparse Autoencoders (SAEs) decompose Large Language Model (LLM) activations into human-interpretable latents. The authors pose two questions: how faithfully do SAEs extract monosemantic, interpretable latents, and does varying SAE sparsity or size affect this quality? To answer them, the researchers use a simple first-letter identification task with complete ground-truth labels. They identify a problematic form of feature splitting they call “feature absorption,” in which seemingly monosemantic latents fail to activate on inputs where they should. The study finds that merely adjusting SAE size or sparsity is insufficient to resolve the problem and argues that a deeper conceptual understanding is needed. (A minimal SAE sketch follows this table.) |
Low | GrooveSquid.com (original content) | This paper looks at how well machines can break down what they’ve learned into simple ideas that people can understand. It uses a type of machine-learning model called a Sparse Autoencoder (SAE) to make sense of what large language models are doing inside. The researchers ask two big questions: do SAEs do a good job of breaking things down, and does making the SAE bigger or sparser help or hurt? To find out, they use a simple game where the model has to identify the first letter of words. They found that the simple ideas sometimes fail to show up when they should, and that just making the SAE bigger or sparser does not fix this. |
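
To make the medium-difficulty summary concrete, here is a minimal, hypothetical sketch of the kind of sparse autoencoder the paper studies. The class name, dimensions, ReLU encoder, and L1 sparsity penalty are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical, minimal sparse autoencoder (SAE) sketch in PyTorch.
# Names, dimensions, and the L1 sparsity penalty are illustrative
# assumptions, not the paper's actual training setup.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_latent: int = 16384):
        super().__init__()
        # Encoder maps LLM activations to a wider, hopefully interpretable latent space.
        self.encoder = nn.Linear(d_model, d_latent)
        # Decoder reconstructs the original activation from the latents.
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, acts: torch.Tensor):
        latents = torch.relu(self.encoder(acts))  # non-negative latent codes
        recon = self.decoder(latents)
        return recon, latents


def sae_loss(recon, acts, latents, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    mse = torch.mean((recon - acts) ** 2)
    return mse + l1_coeff * latents.abs().mean()


# Usage sketch: a batch of activations from some LLM layer.
acts = torch.randn(32, 768)
sae = SparseAutoencoder()
recon, latents = sae(acts)
loss = sae_loss(recon, acts, latents)
```

In this sketch, the two knobs the paper varies, dictionary size (`d_latent`) and sparsity (here the `l1_coeff` weight), are ordinary hyperparameters; the paper's finding is that tuning them alone does not eliminate feature absorption.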
Keywords
» Artificial intelligence » Large language model » Machine learning