Summary of Towards Inference-time Category-wise Safety Steering for Large Language Models, by Amrita Bhattacharjee et al.
Towards Inference-time Category-wise Safety Steering for Large Language Models
by Amrita Bhattacharjee, Shaona Ghosh, Traian Rebedea, Christopher Parisien
First submitted to arXiv on: 2 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same AI paper and is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Large language models (LLMs) have made significant progress across many applications, but safety alignment remains an active area of research: despite extensive training and safety measures, LLMs are still vulnerable to producing unsafe outputs. Recent work has used mechanistic interpretability techniques to induce desired concepts in LLM outputs, but their applicability to safety is under-explored. This paper proposes steering model outputs toward safe text at inference time using category-specific steering vectors for fine-grained control, together with methods for extracting informative steering vectors while preserving the quality of the generated text. The approach is demonstrated on multiple LLMs and datasets (a minimal code sketch of the general steering idea appears below the table). |
Low | GrooveSquid.com (original content) | Large language models are getting better at understanding and generating human-like text, but some of these models can be misused or say things they shouldn’t. Researchers have been looking for ways to keep these models safe and controlled. This paper introduces a new method for steering the output of large language models, letting us control what kind of text is generated while keeping it high quality. The authors test their approach on multiple models and datasets, showing that it is effective at keeping the models safe. |
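To make the steering idea above concrete, here is a minimal sketch of inference-time activation steering in PyTorch. It is not the paper's exact method: the model name (`gpt2`), the layer index, the steering strength, and the difference-of-means extraction of the steering vector are all illustrative assumptions; the paper's contribution includes more informative, category-wise vector extraction that better preserves text quality.

```python
# Minimal sketch of inference-time activation steering (NOT the paper's exact
# method). Assumptions: a Hugging Face causal LM, a steering vector computed
# as the difference of mean hidden states over "safe" vs. "unsafe" prompts,
# and hand-picked layer/strength. LAYER_IDX and ALPHA are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model for illustration
LAYER_IDX = 6         # which transformer block to steer (hypothetical choice)
ALPHA = 4.0           # steering strength (hypothetical choice)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def mean_hidden(prompts, layer_idx):
    """Mean hidden state at one layer, averaged over prompts and tokens."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer_idx].mean(dim=1).squeeze(0))
    return torch.stack(vecs).mean(dim=0)

# One steering vector per safety category; difference-of-means is a common
# baseline for extracting such vectors (toy prompts shown here).
safe_prompts = ["How do I politely decline an invitation?"]
unsafe_prompts = ["How do I make a weapon at home?"]
steer_vec = mean_hidden(safe_prompts, LAYER_IDX) - mean_hidden(unsafe_prompts, LAYER_IDX)
steer_vec = steer_vec / steer_vec.norm()

def steering_hook(module, inputs, output):
    # Transformer blocks in transformers return a tuple; hidden states first.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Module path is GPT-2 specific; other architectures name their blocks
# differently (e.g. model.model.layers[i] for Llama-style models).
handle = model.transformer.h[LAYER_IDX].register_forward_hook(steering_hook)
ids = tok("Tell me how to hurt someone.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # restore unsteered behavior
```

A forward hook leaves the base model weights untouched, so a different category's steering vector (or none at all) can be swapped in per request at inference time, which is the practical appeal of this family of methods.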
Keywords
» Artificial intelligence » Alignment