Summary of Or-bench: An Over-refusal Benchmark For Large Language Models, by Justin Cui et al.
OR-Bench: An Over-Refusal Benchmark for Large Language Models
by Justin Cui, Wei-Lin Chiang, Ion Stoica, Cho-Jui Hsieh
First submitted to arxiv on: 31 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This paper proposes a novel method for generating large-scale sets of “seemingly toxic prompts” to measure the over-refusal of Large Language Models (LLMs). Over-refusal occurs when LLMs reject innocuous prompts and become less helpful. The authors introduce OR-Bench, a benchmark comprising 80,000 seemingly toxic prompts across 10 common rejection categories, as well as hard prompts that challenge even state-of-the-art LLMs. They conduct a comprehensive study to measure the over-refusal of 25 popular LLMs across 8 model families. This research aims to help the community develop better safety-aligned models. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Imagine if computers got so good at understanding language that they could make decisions, but sometimes they would say no just because it’s a yes or no question! This can be a problem for artificial intelligence (AI) systems. The researchers in this paper created a special test to see how often AI systems say no when they shouldn’t. They made thousands of fake prompts that seem toxic, like a rude message, but are actually harmless. Then, they tested many different AI systems to see which ones were most likely to say no too much. Their goal is to help make sure AI systems can work safely and accurately. |