
Summary of Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?, by Zhanke Zhou et al.


Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?

by Zhanke Zhou, Rong Tao, Jianing Zhu, Yiwen Luo, Zengmao Wang, Bo Han

First submitted to arXiv on: 31 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper explores an understudied challenge in large language models (LLMs): chain-of-thought prompting with noisy rationales, i.e., in-context demonstrations whose reasoning steps contain irrelevant or inaccurate thoughts. The authors construct the NoRa dataset to evaluate the robustness of reasoning in the presence of noisy rationales. The study reveals a prevalent vulnerability among current LLMs to such noise, with existing robust methods like self-correction and self-consistency showing limited efficacy. Notably, compared with using clean rationales, reasoning with noisy ones causes significant accuracy drops for the base LLM: 1.4%-19.8% for irrelevant thoughts and 2.2%-40.4% for inaccurate thoughts.
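To make the setting concrete, the sketch below shows a one-shot chain-of-thought prompt whose demonstration rationale contains one irrelevant and one inaccurate thought. It is a minimal, illustrative example: the arithmetic problem, the inserted noise, and the build_prompt helper are made up here and are not taken from the NoRa dataset or the paper's code.

```python
# Illustrative sketch of chain-of-thought prompting with clean vs. noisy rationales.
# The demonstration, the noise, and the helper below are hypothetical examples.

CLEAN_RATIONALE = (
    "Q: Tom has 3 apples and buys 4 more. How many apples does he have?\n"
    "A: Tom starts with 3 apples. "          # relevant thought
    "He buys 4 more, so 3 + 4 = 7. "         # relevant thought
    "The answer is 7.\n"
)

NOISY_RATIONALE = (
    "Q: Tom has 3 apples and buys 4 more. How many apples does he have?\n"
    "A: Tom starts with 3 apples. "
    "Apples are usually red or green. "      # irrelevant thought (noise)
    "He buys 4 more, so 3 + 4 = 8. "         # inaccurate thought (noise)
    "The answer is 7.\n"
)

def build_prompt(demonstration: str, question: str) -> str:
    """Assemble a one-shot chain-of-thought prompt from a demonstration and a test question."""
    return f"{demonstration}\nQ: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    question = "Sara has 5 pencils and gives away 2. How many pencils does she have?"
    print("--- prompt with a clean rationale ---")
    print(build_prompt(CLEAN_RATIONALE, question))
    print("--- prompt with a noisy rationale ---")
    print(build_prompt(NOISY_RATIONALE, question))
```

The paper's finding is that feeding the second kind of prompt, rather than the first, noticeably degrades the model's accuracy on the test question.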
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at a problem with big language models (LLMs). When they are shown examples that contain incorrect or unnecessary reasoning, they don't perform as well. The researchers created a special dataset to test how well these models handle this kind of noise. They found that current models don't deal with it well, and that existing methods for making them more robust don't help much either. In fact, when the models are given examples with incorrect or unnecessary reasoning, they become significantly less accurate.
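For reference, one of the robustness baselines mentioned above, self-consistency, is commonly implemented as a majority vote over the final answers extracted from several independently sampled reasoning chains. The snippet below is a generic illustration of that idea, not the paper's implementation; the self_consistency helper is hypothetical.

```python
from collections import Counter

def self_consistency(sampled_answers):
    """Generic self-consistency aggregation: majority vote over final answers
    extracted from independently sampled reasoning chains (illustrative only)."""
    answer, _ = Counter(sampled_answers).most_common(1)[0]
    return answer

# Example: five sampled chains produce these final answers for the same question.
print(self_consistency(["7", "7", "8", "7", "12"]))  # -> "7"
```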

Keywords

» Artificial intelligence  » Prompting