


Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models

by Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, Claudio Mayrink Verdun, David Bau, Samuel Marks

First submitted to arXiv on: 31 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper studies the latent features encoded in language model (LM) representations. Recent work has shown that sparse autoencoders (SAEs) can disentangle interpretable features from LM representations, but evaluating SAE quality is difficult because we lack ground-truth collections of interpretable features to compare against. To address this, the authors propose measuring progress in interpretable dictionary learning using LMs trained on chess and Othello transcripts, settings that come with natural collections of interpretable features (board-state properties). These ground-truth features let the authors define new supervised metrics for SAE quality. The authors also introduce p-annealing, a new SAE training technique that improves performance on both prior unsupervised metrics and the new supervised metrics.
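For readers who want a concrete picture, the sketch below shows what a basic sparse autoencoder over LM activations can look like, with a sparsity penalty whose norm exponent p is annealed during training in the spirit of the p-annealing technique the paper introduces. This is an illustrative sketch, not the authors' implementation: the layer sizes, coefficients, and annealing schedule are assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstructs LM activations through an overcomplete, sparse hidden layer."""
    def __init__(self, d_model=512, d_hidden=4096):  # sizes are illustrative assumptions
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        return self.decoder(f), f

def sae_loss(x, x_hat, f, p, sparsity_coeff=1e-3):
    """Reconstruction error plus an L_p penalty on feature activations."""
    recon = (x - x_hat).pow(2).sum(dim=-1).mean()
    sparsity = f.abs().pow(p).sum(dim=-1).mean()
    return recon + sparsity_coeff * sparsity

# Schematic p-annealing: start with a convex L1 penalty (p = 1) and lower the
# exponent over training so the penalty better approximates counting active
# features. The linear schedule and end value here are assumptions.
def p_schedule(step, total_steps, p_start=1.0, p_end=0.2):
    t = step / max(total_steps, 1)
    return p_start + t * (p_end - p_start)
```

In a training loop, the loss would be computed with p = p_schedule(step, total_steps), so the penalty starts out L1-like and gradually sharpens toward a count of active features.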
Low Difficulty Summary (original content by GrooveSquid.com)
This paper tries to figure out what hidden features a language model stores when it learns chess and Othello from game transcripts. Right now it is hard to tell whether the tools we use to find those hidden features are doing a good job, because we usually have no examples of what “good” looks like. Games fix that: the authors can check whether the features these tools find line up with real facts about the board, like “there is a knight on f3”. They also introduce a new way of training the feature-finding tool that seems to work better.
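To make the “knight on f3” idea concrete, here is a rough sketch of how one could score a single learned feature against a ground-truth board property. It is an illustration only: the function, its inputs, and the activation threshold are assumptions, not the paper's actual evaluation code.

```python
import torch

def feature_vs_property_f1(feature_acts, property_labels, threshold=0.0):
    """Compare one SAE feature's firing pattern to a ground-truth board property.

    feature_acts:    tensor with the feature's activation at each position in the game transcript
    property_labels: boolean tensor, True where the property holds (e.g. a knight is on f3)
    """
    fired = feature_acts > threshold
    tp = (fired & property_labels).sum().float()
    precision = tp / fired.sum().clamp(min=1)
    recall = tp / property_labels.sum().clamp(min=1)
    return 2 * precision * recall / (precision + recall).clamp(min=1e-8)
```

A feature that fires exactly when the property holds scores near 1; a feature unrelated to the property scores near 0.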

Keywords

* Artificial intelligence  * Language model  * Supervised  * Unsupervised