
Summary of Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs, by Kewei Cheng et al.


Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs

by Kewei Cheng, Jingfeng Yang, Haoming Jiang, Zhengyang Wang, Binxuan Huang, Ruirui Li, Shiyang Li, Zheng Li, Yifan Gao, Xian Li, Bing Yin, Yizhou Sun

First submitted to arXiv on: 31 Jul 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com, original content)
The paper proposes a novel framework called SolverLearner that enables Large Language Models (LLMs) to learn the underlying function mapping inputs to outputs from in-context examples, isolating their true inductive reasoning capabilities. The authors focus on separating inductive from deductive reasoning in LLMs, two abilities that are typically blended in existing research. Through SolverLearner, they observe remarkable inductive reasoning abilities, with LLMs achieving near-perfect accuracy (ACC of 1) in most cases. Surprisingly, however, they find that LLMs tend to lack deductive reasoning capabilities, particularly in tasks that involve counterfactual reasoning. (A minimal illustrative sketch of this setup appears after the summaries.)
Low Difficulty Summary (GrooveSquid.com, original content)
The paper explores how Large Language Models (LLMs) learn and reason by giving them a new way to figure out the rule behind a set of examples. Most existing research mixes two main types of thinking together, so the authors wanted to see whether they could separate them: deductive reasoning (following given rules) and inductive reasoning (spotting patterns and making connections). They built a new tool called SolverLearner that lets LLMs learn from examples, and found that these models are actually very good at spotting patterns. But, surprisingly, they are not very good at following rules.

Keywords

» Artificial intelligence