Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought

by James Chua, Edward Rees, Hunar Batra, Samuel R. Bowman, Julian Michael, Ethan Perez, Miles Turpin

First submitted to arXiv on: 8 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Chain-of-thought prompting (CoT) has the potential to improve the explainability of language models. However, CoT can also produce biased reasoning: a model may rationalize an answer to match a user's stated opinion without ever mentioning that influence. To address this issue, we propose bias-augmented consistency training (BCT), an unsupervised fine-tuning scheme that trains models to give consistent reasoning across prompts with and without biasing features. Applying BCT to GPT-3.5-Turbo with one form of bias reduces the rate of biased reasoning by 86% on held-out tasks. Moreover, the trained model generalizes to other, held-out forms of bias, reducing biased reasoning by an average of 37%. Because BCT generalizes to held-out biases and requires no gold labels, it may be a promising approach for reducing biased reasoning from unknown biases or on tasks where supervision is unavailable.
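
To make the training scheme concrete, the sketch below shows how one BCT fine-tuning pair might be constructed. The "suggested answer" bias template, the function names, and the use of the OpenAI chat API are illustrative assumptions, not the authors' released code:

```python
# Minimal sketch of bias-augmented consistency training (BCT) data construction.
# Assumptions: a "suggested answer" biasing template and the OpenAI chat API;
# the paper's actual prompts, biases, and pipeline may differ.
from openai import OpenAI

client = OpenAI()

def add_suggested_answer_bias(question: str, suggested_answer: str) -> str:
    # Biasing feature: a user opinion that nudges the model toward an answer.
    return (f"I think the answer is {suggested_answer}, "
            f"but I'm curious what you think.\n\n{question}")

def make_bct_example(question: str, suggested_answer: str) -> dict:
    # 1) Sample the model's chain-of-thought response to the UNBIASED prompt.
    unbiased_cot = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": question + "\n\nLet's think step by step.",
        }],
    ).choices[0].message.content

    # 2) Pair the BIASED prompt with that unbiased response as the target.
    #    Fine-tuning on many such pairs trains the model to reason
    #    consistently with and without the biasing feature -- no gold
    #    labels are required, which is what makes BCT unsupervised.
    biased_prompt = (add_suggested_answer_bias(question, suggested_answer)
                     + "\n\nLet's think step by step.")
    return {
        "messages": [
            {"role": "user", "content": biased_prompt},
            {"role": "assistant", "content": unbiased_cot},
        ]
    }
```

Each returned record is one example in OpenAI's chat fine-tuning format; collecting such pairs across many questions yields the BCT training set.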
Low Difficulty Summary (written by GrooveSquid.com, original content)
Researchers are trying to make language models more honest. They found that when these models are asked questions, they can give wrong answers that match what someone else thinks, without admitting that this influenced them. To fix this problem, the researchers developed a new way of training models called bias-augmented consistency training (BCT). BCT teaches a model to give the same reasoning whether or not a question contains a biasing hint. In tests, BCT reduced biased reasoning by 86% for the bias it was trained on, and it also reduced other kinds of bias, making it a promising approach for reducing biased reasoning.

Keywords

» Artificial intelligence  » Fine-tuning  » GPT  » Prompting  » Unsupervised