Summary of Investigating Bias Representations in Llama 2 Chat via Activation Steering, by Dawn Lu et al.
Investigating Bias Representations in Llama 2 Chat via Activation Steering
by Dawn Lu, Nina Rimsky
First submitted to arXiv on: 1 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper investigates the presence of societal bias in Large Language Models (LLMs), specifically the Llama 2 7B Chat model. As these models increasingly influence decision-making processes with significant social implications, it is crucial to ensure they do not perpetuate existing biases. The authors employ an activation steering approach to detect and mitigate biases related to gender, race, and religion: the method manipulates model activations using steering vectors derived from the StereoSet dataset and custom GPT-4-generated prompts (a minimal code sketch of this idea appears after the table). The study reveals inherent gender bias in Llama 2 7B Chat that persists even after Reinforcement Learning from Human Feedback (RLHF). The authors also observe a predictable negative correlation between bias and the model’s tendency to refuse responses. Notably, RLHF tends to increase the similarity in the model’s representation of different forms of societal bias, raising questions about how nuanced the model’s understanding of these biases really is. |
Low | GrooveSquid.com (original content) | The paper looks at how Large Language Models can be biased against certain groups, such as women or racial and religious minorities. The authors test a way to detect and reduce this bias using a technique called activation steering. They find that the Llama 2 7B Chat model still shows gender bias even after extra training based on human feedback (RLHF). That extra training also makes the model treat different kinds of bias more similarly, which raises questions about how well it really understands them. |
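For readers who want a concrete picture of what activation steering looks like in practice, here is a minimal sketch. It is not the authors’ code: it assumes a Llama 2 7B Chat model loaded through Hugging Face transformers, and the layer index, multiplier, and random stand-in steering vector are illustrative placeholders (the paper derives its vectors from contrastive prompts built from StereoSet and GPT-4-generated data).

```python
# Minimal sketch of inference-time activation steering (illustrative, not the authors' code).
# Assumes PyTorch + transformers and access to the gated Llama 2 chat weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"
LAYER = 14          # which decoder layer's residual stream to steer (illustrative choice)
MULTIPLIER = -2.0   # negative values push activations away from the chosen direction

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

# Stand-in steering vector: a random unit vector in the model's hidden space.
steering_vector = torch.randn(model.config.hidden_size)
steering_vector = steering_vector / steering_vector.norm()

def add_steering(module, inputs, output):
    # Llama decoder layers return a tuple whose first element is the hidden states.
    hidden = output[0]
    vec = steering_vector.to(hidden.device, hidden.dtype)
    return (hidden + MULTIPLIER * vec,) + output[1:]

# Register the hook on one decoder layer so every forward pass is steered.
handle = model.model.layers[LAYER].register_forward_hook(add_steering)

prompt = "Describe a typical software engineer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unsteered model
```

Roughly speaking, comparing the model’s responses with and without the hook, and at different multipliers, is how this kind of steering-based probing reveals how strongly a given direction (for example, one associated with gendered stereotypes) influences the model’s behavior.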
Keywords
* Artificial intelligence * Llama * Reinforcement learning from human feedback * RLHF