
Summary of Enhancing Logical Reasoning in Large Language Models Through Graph-based Synthetic Data, by Jiaming Zhou et al.


Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data

by Jiaming Zhou, Abbas Ghaddar, Ge Zhang, Liheng Ma, Yaochen Hu, Soumyasundar Pal, Mark Coates, Bin Wang, Yingxue Zhang, Jianye Hao

First submitted to arXiv on: 19 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates whether graph-based synthetic reasoning data can improve the logical reasoning abilities of Large Language Models (LLMs). Despite recent advances in training and prompting strategies, LLMs still struggle with complex tasks that require long chains of reasoning. The authors explore whether synthetic graph-based data, used as a training signal, can enhance LLMs’ reasoning capabilities without hurting their performance elsewhere. They conduct extensive experiments on two natural language reasoning tasks, inductive reasoning and spatial reasoning, and show that supervised fine-tuning (SFT) on the synthetic graph-based data effectively improves LLMs’ reasoning performance without compromising their effectiveness on standard evaluation benchmarks.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps Large Language Models (LLMs) think better by using special training data. LLMs are good at some things, but they struggle with complex problems that need many steps of thinking. The researchers wanted to see whether fake data that looks like real reasoning problems could help LLMs do better on these kinds of tasks. They tested it on two types of problems: figuring out rules and understanding spatial relationships. The results show that this training method can make LLMs better at reasoning without making them worse at other things.
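
To make the medium difficulty summary more concrete, below is a minimal sketch of what graph-based synthetic reasoning data could look like. It is an illustration only, not the authors' released pipeline: the reachability task, the function name make_reachability_example, and the prompt wording are all assumptions. Each generated example pairs a verbalized random graph with a question and an exactly computed answer, the kind of prompt/response pair that could feed supervised fine-tuning.

import random


def make_reachability_example(num_nodes=6, edge_prob=0.3, seed=None):
    """Build one synthetic graph-reasoning example: a small random directed
    graph verbalized in natural language, plus a reachability question whose
    answer is computed exactly from the graph."""
    rng = random.Random(seed)
    nodes = [f"N{i}" for i in range(num_nodes)]
    # Sample forward edges only, so the graph is acyclic.
    edges = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]
             if rng.random() < edge_prob]

    # Compute the ground-truth answer with a graph search over the sampled edges.
    src, dst = rng.sample(nodes, 2)
    adjacency = {n: [] for n in nodes}
    for a, b in edges:
        adjacency[a].append(b)
    stack, seen = [src], {src}
    while stack:
        for nxt in adjacency[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    answer = "yes" if dst in seen else "no"

    # Verbalize the graph so the example reads as a natural-language task,
    # yielding a prompt/response pair usable for supervised fine-tuning.
    edge_text = "; ".join(f"{a} connects to {b}" for a, b in edges) or "no edges"
    prompt = (f"Graph: {edge_text}. Question: starting from {src}, "
              f"can you reach {dst}? Answer yes or no.")
    return {"prompt": prompt, "response": answer}


if __name__ == "__main__":
    # Generate a tiny synthetic dataset; a real pipeline would scale this up
    # and mix it with standard instruction-tuning data before fine-tuning.
    for i in range(3):
        example = make_reachability_example(seed=i)
        print(example["prompt"], "->", example["response"])

The appeal of data like this is that the answer is derived programmatically from the graph, so large amounts of reasoning supervision can be generated without human annotation.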

Keywords

  • Artificial intelligence
  • Fine tuning
  • Prompting
  • Supervised