
Summary of Universal Neurons in GPT2 Language Models, by Wes Gurnee et al.


Universal Neurons in GPT2 Language Models

by Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, Dimitris Bertsimas

First submitted to arXiv on: 22 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Neural networks learn underlying mechanisms, but how universal are those mechanisms across different models? This paper investigates whether individual neurons in GPT2 models trained from different random seeds exhibit similar behaviors. The authors find that 1-5% of neurons consistently activate on the same inputs despite the differing initializations. These “universal” neurons typically have clear interpretations and fall into a small number of families. Studying their weights reveals functional roles such as deactivating attention heads, adjusting the next-token distribution, and predicting next tokens. (A brief code sketch of this cross-seed comparison follows these summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
Neural networks are really good at tasks like recognizing pictures or understanding speech. But have you ever wondered whether they use the same “rules” to do those tasks? This paper asks whether individual neurons in these networks (which are a bit like tiny building blocks) behave the same way even when the network starts from a different random beginning point. The authors found that some neurons respond to the same inputs no matter which starting point is used. These special neurons have clear meanings and can be grouped into categories. This helps us understand how neural networks work and how we can make them better.

Keywords

* Artificial intelligence
* Attention
* Token