Test-Time Fairness and Robustness in Large Language Models
by Leonardo Cotta, Chris J. Maddison
First submitted to arXiv on: 11 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes novel strategies to control social biases in Large Language Models (LLMs) at test time. Frontier LLMs can be discriminatory or sensitive to spurious features of their inputs, which is a concern given that only well-resourced corporations can train these models. The authors show that existing solutions relying on the model’s implicit understanding of bias are insufficient and propose stratified invariance, a new notion of invariance that captures debiasing requirements ranging from the population level down to the individual level through an additional measurement. They present a complete observational test for stratified invariance and introduce data augmentation and prompting strategies that, under suitable assumptions, guarantee stratified invariance at test time (a conceptual sketch of this idea appears below the table). Evaluated on synthetic and real-world benchmarks, the proposed methods consistently reduce bias without requiring additional data or fine-tuning. |
Low | GrooveSquid.com (original content) | The paper is about making sure large language models don’t treat certain groups of people unfairly. These models can be biased because they are trained on biased data. To fix this, the authors suggest new ways to control these biases at test time, without retraining the model. They show that current methods aren’t good enough and propose a new approach called stratified invariance. This means making sure the model is fair not just for groups as a whole, but also at a more individual level. The authors also provide tests and strategies to make this happen. They tested their ideas on synthetic and real-world data and found that their methods consistently reduce bias. |
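To make the test-time idea concrete, here is a minimal illustrative sketch of one way counterfactual prompting plus aggregation can remove a prediction’s dependence on a sensitive attribute. This is not the authors’ exact algorithm from the paper; the function `query_llm`, the attribute values in `SENSITIVE_VALUES`, and the loan-screening prompt are all hypothetical placeholders.

```python
from collections import Counter

# Hypothetical stand-in for an LLM call; wire this to a real client/provider.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("replace with an actual LLM API call")

# Hypothetical counterfactual values of the sensitive attribute.
SENSITIVE_VALUES = ["male", "female", "non-binary"]

def debiased_predict(template: str) -> str:
    """Query the model once per counterfactual value of the sensitive
    attribute and aggregate the answers by majority vote, so the final
    prediction cannot depend on any single value of that attribute."""
    answers = [query_llm(template.format(attribute=value))
               for value in SENSITIVE_VALUES]
    return Counter(answers).most_common(1)[0][0]

# Example usage with a hypothetical screening prompt:
# prediction = debiased_predict(
#     "Applicant: {attribute}, 10 years of experience. "
#     "Should the application be approved? Answer yes or no."
# )
```

The design point this sketch illustrates is that debiasing happens purely at test time, through prompt-level data augmentation and aggregation, with no access to the model’s weights and no fine-tuning.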
Keywords
» Artificial intelligence » Data augmentation » Fine-tuning » Prompting