Summary of Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models, by Can Demircan et al.


Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models

by Can Demircan, Tankred Saanum, Akshay K. Jagadish, Marcel Binz, Eric Schulz

First submitted to arXiv on: 2 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores the mechanism behind large language models’ (LLMs) ability to adapt to specific problems, such as reinforcement learning (RL) tasks, through in-context learning. The authors show that Llama 3 70B can solve simple RL problems in context, and they analyze its residual stream using Sparse Autoencoders (SAEs). The SAEs reveal representations that closely match temporal difference (TD) errors and that are causally involved in computing TD errors and Q-values. This work establishes a methodology for studying and manipulating in-context learning with SAEs, providing insights into how LLMs learn to solve specific problems.
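The TD errors and Q-values mentioned above come from standard tabular reinforcement learning. As a minimal illustration (not the paper's code), a single Q-learning step computes a TD error and nudges the value estimate toward it:

```python
import numpy as np

def td_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: the TD error drives the value update."""
    # TD error = bootstrapped target minus the current estimate
    td_error = reward + gamma * np.max(Q[next_state]) - Q[state, action]
    Q[state, action] += alpha * td_error
    return td_error

# Toy example: 2 states, 2 actions, all values initialized to zero
Q = np.zeros((2, 2))
err = td_update(Q, state=0, action=1, reward=1.0, next_state=1)
# With Q all zeros, the TD error equals the reward (1.0),
# and Q[0, 1] moves to alpha * 1.0 = 0.1
```

The paper's finding is that directions in Llama 3 70B's residual stream, recovered by SAEs, track quantities like `td_error` above as the model learns in context.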
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how big language models can learn to do new things just by looking at a few examples. It’s like when you’re playing a game and you figure out the rules after seeing a few moves. The model, called Llama 3 70B, is really good at this kind of learning. By studying what’s going on inside the model, scientists found that it creates special patterns in its brain (called representations) that help it solve problems. These patterns are important for figuring out how to do things like play games or make decisions.
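The Sparse Autoencoders used to study the model can be sketched as a one-hidden-layer autoencoder with a sparsity penalty on its codes. The snippet below is a minimal illustration with made-up dimensions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sae_forward(x, W_enc, b_enc, W_dec, b_dec):
    """Encode activations into a sparse code, then reconstruct them."""
    z = np.maximum(0.0, x @ W_enc + b_enc)   # ReLU keeps codes non-negative and sparse-ish
    x_hat = z @ W_dec + b_dec                # linear decoder reconstructs the activation
    return z, x_hat

def sae_loss(x, x_hat, z, l1_coef=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparsity."""
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coef * np.mean(np.abs(z))
    return recon + sparsity

# Toy setup: 16-dim "residual stream" activations, 64 dictionary features
d_model, d_hidden = 16, 64
W_enc = rng.normal(0.0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0.0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

x = rng.normal(size=(8, d_model))            # a batch of 8 activation vectors
z, x_hat = sae_forward(x, W_enc, b_enc, W_dec, b_dec)
loss = sae_loss(x, x_hat, z)
```

Training such an autoencoder on residual-stream activations yields a dictionary of sparse features; the paper inspects those features and finds some that align with TD errors and Q-values.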

Keywords

  • Artificial intelligence
  • Llama
  • Reinforcement learning