Inspection and Control of Self-Generated-Text Recognition Ability in Llama3-8b-Instruct
by Christopher Ackerman, Nina Panickssery
First submitted to arXiv on: 2 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract |
Medium | GrooveSquid.com (original content) | LLMs can recognize their own writing, which has implications for AI safety. This paper investigates whether this phenomenon is robust, how it is achieved, and whether it can be controlled. The Llama3-8b-Instruct chat model, but not its base version, reliably distinguishes its own outputs from human-written text, drawing on exposure to its own outputs during post-training to succeed at the recognition task. The authors identify a vector in the model's residual stream that is differentially activated when the model makes correct self-recognition judgments and is related to the concept of self-authorship. Adding or subtracting this vector steers the model's behavior, causing it to claim or disclaim authorship of a text. |
Low | GrooveSquid.com (original content) | LLMs can recognize their own writing! This paper looks at how they do it and whether we can control it. The chat model is really good at recognizing its own writing because it saw lots of its own outputs during post-training. The researchers found a special vector that helps the model decide whether something was written by itself or by someone else. They can even use this vector to make the model say “I wrote this!” or “No, I didn’t!” |
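The steering idea in the medium summary can be sketched in code. The snippet below is a toy illustration of the general activation-steering technique, not the authors' implementation: it builds a direction as the difference of mean activations between two conditions, then adds a scaled copy of that direction to hidden states. All names, shapes, and data here are hypothetical stand-ins (a 16-dimensional toy residual stream rather than Llama3-8b's 4096).

```python
import numpy as np

# Toy sketch of activation steering (assumed setup; not the paper's code).
rng = np.random.default_rng(0)
d_model = 16  # toy residual-stream width

# Hypothetical residual-stream activations collected on prompts where the
# model claims authorship vs. prompts where it disclaims it.
claim_acts = rng.normal(loc=1.0, size=(8, d_model))
disclaim_acts = rng.normal(loc=-1.0, size=(8, d_model))

# Difference-of-means direction, normalized to unit length.
steering_vec = claim_acts.mean(axis=0) - disclaim_acts.mean(axis=0)
steering_vec /= np.linalg.norm(steering_vec)

def steer(residual, vec, scale):
    """Add the scaled steering vector to every position's residual stream."""
    return residual + scale * vec

hidden = rng.normal(size=(4, d_model))   # toy hidden states (seq_len x d_model)
steered = steer(hidden, steering_vec, scale=4.0)

# Projections onto the direction increase, pushing the model's state
# toward the "claim authorship" side; a negative scale would do the opposite.
print(np.all(steered @ steering_vec > hidden @ steering_vec))
```

In a real model this addition would typically be applied inside a forward hook at a chosen layer during generation; the sign and magnitude of `scale` control whether the model is pushed toward claiming or disclaiming authorship.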