
Summary of MathScale: Scaling Instruction Tuning for Mathematical Reasoning, by Zhengyang Tang et al.


MathScale: Scaling Instruction Tuning for Mathematical Reasoning

by Zhengyang Tang, Xingxing Zhang, Benyou Wang, Furu Wei

First submitted to arXiv on: 5 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)

Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)

MathScale improves large language models' (LLMs) ability to solve mathematical problems by generating high-quality mathematical reasoning data with frontier LLMs such as GPT-3.5. Inspired by how humans learn mathematics, the method extracts topics and knowledge points from seed math questions, builds a concept graph over them, and samples from this graph to generate new math questions. MathScale scales effectively along the size axis of the dataset, producing MathScaleQA, a dataset of two million math question-answer pairs. To evaluate LLMs' mathematical reasoning comprehensively, the authors also construct MwpBench, a benchmark of ten datasets covering K-12, college, and competition-level math problems. Fine-tuning open-source LLMs on MathScaleQA significantly improves their mathematical reasoning, with MathScale-7B achieving state-of-the-art results on MwpBench.
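
The pipeline the medium summary describes (extract concepts from seed questions, link co-occurring concepts into a graph, then sample the graph to prompt an LLM for new question-answer pairs) can be sketched in Python. This is a minimal sketch, not the paper's implementation: extract_concepts, the co-occurrence graph, the random-walk sampler, and the llm callable are all hypothetical stand-ins for the GPT-3.5 prompting steps MathScale actually uses.

```python
# Minimal sketch of MathScale-style data generation, assuming a generic
# chat-completion callable `llm(prompt) -> str`. Names and the random-walk
# sampler are illustrative stand-ins, not the paper's actual code.
import random
from collections import defaultdict

def extract_concepts(seed_question, llm):
    """Ask the LLM which topics and knowledge points a seed question
    exercises (the paper does this with a frontier LLM such as GPT-3.5)."""
    reply = llm("List the topics and knowledge points needed to solve: "
                + seed_question)
    # Assume the LLM returns one concept per line.
    return [line.strip() for line in reply.splitlines() if line.strip()]

def build_concept_graph(seed_questions, llm):
    """Connect concepts that co-occur in the same seed question."""
    graph = defaultdict(set)
    for question in seed_questions:
        concepts = extract_concepts(question, llm)
        for a in concepts:
            for b in concepts:
                if a != b:
                    graph[a].add(b)
    return graph

def sample_concepts(graph, walk_length=3):
    """Random-walk over the graph to pick a mix of related concepts."""
    node = random.choice(list(graph))
    walk = [node]
    for _ in range(walk_length):
        neighbors = list(graph[node])
        if not neighbors:
            break
        node = random.choice(neighbors)
        walk.append(node)
    return walk

def generate_qa_pair(graph, llm):
    """Prompt the LLM to write and solve a question about sampled concepts."""
    concepts = sample_concepts(graph)
    return llm("Write a new math word problem combining these concepts, "
               "then solve it step by step: " + ", ".join(concepts))
```

Calling generate_qa_pair repeatedly over a graph built from a large seed set is what scaling "along the size axis" amounts to in practice: each call yields one more candidate pair for a MathScaleQA-style dataset.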

Low Difficulty Summary (GrooveSquid.com, original content)

MathScale creates a way for language models to get better at math by generating new math questions based on concepts from questions they have already seen. The method uses large language models like GPT-3.5 to create a huge dataset of 2 million math question-answer pairs. It also builds a benchmark with different types of math problems to test how well the models do.

Keywords

  • Artificial intelligence
  • Fine-tuning
  • GPT