ReasonAgain: Using Extractable Symbolic Programs to Evaluate Mathematical Reasoning

by Xiaodong Yu, Ben Zhou, Hao Cheng, Dan Roth

First submitted to arXiv on: 24 Oct 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a new approach to evaluating the mathematical reasoning abilities of large language models (LLMs): using symbolic programs for automated evaluation. The authors use GPT-4o to extract executable programs from popular math datasets such as GSM8K and MATH, and verify each program against the original question's input-output pair. They then prompt GPT-4o to generate new questions from the extracted program, creating alternative input-output pairs that test whether an LLM can still reason correctly when the surface details change. Under this evaluation, state-of-the-art LLMs show significant accuracy drops compared to traditional static examples, suggesting that their mathematical reasoning is fragile.
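
To make the pipeline concrete, here is a minimal Python sketch of the idea described above. The question, variable names, and numbers are invented for illustration, and the GPT-4o extraction and question-rewriting steps are replaced by hand-written code; this shows only the shape of the evaluation, not the paper's actual implementation.

```python
# Sketch of the ReasonAgain-style evaluation loop (illustrative only):
# represent a GSM8K-style question as an executable program over named
# inputs, verify it against the original answer, then perturb the inputs
# to create new test cases with known ground-truth answers.

import random

def solve(apples_per_box: int, num_boxes: int, apples_eaten: int) -> int:
    """Symbolic program for: 'A box holds N apples. Sam buys M boxes
    and eats K apples. How many apples are left?'"""
    return apples_per_box * num_boxes - apples_eaten

# Step 1: verify the extracted program on the original input-output pair.
original_inputs = {"apples_per_box": 12, "num_boxes": 3, "apples_eaten": 5}
original_answer = 31
assert solve(**original_inputs) == original_answer

# Step 2: perturb the inputs to generate alternative question instances.
# In the paper this step (and rewriting the question text) is done by
# prompting GPT-4o; here we simply sample new numbers directly.
def perturb(inputs: dict, rng: random.Random) -> dict:
    return {key: rng.randint(2, 20) for key in inputs}

rng = random.Random(0)
for _ in range(3):
    new_inputs = perturb(original_inputs, rng)
    gold = solve(**new_inputs)  # ground-truth answer for the new variant
    print(new_inputs, "->", gold)
    # An LLM under evaluation would now be asked the rewritten question
    # and its answer compared against `gold`.
```

A model that truly reasons should answer the perturbed variants about as well as the original; a sharp accuracy drop suggests it was pattern-matching on the static example.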

Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how good large language models really are at math problems. Right now, we test these models by checking whether they get the right answer or can explain their thinking. But that isn't a reliable test, because it doesn't reveal when a model is relying on memorized examples or shortcuts instead of actually reasoning. The authors came up with a new way to test the models: they turn math problems into small programs that can solve them, then use those programs to create fresh versions of each problem. When they tested some popular language models this way, the models turned out to be not as good at math as we thought.

Keywords

» Artificial intelligence  » Prompt