
Summary of Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities, by Wenyue Hua et al.


Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities

by Wenyue Hua, Kaijie Zhu, Lingyao Li, Lizhou Fan, Shuhang Lin, Mingyu Jin, Haochen Xue, Zelong Li, JinDong Wang, Yongfeng Zhang

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The study aims to distinguish pure logical reasoning from text understanding by analyzing abstract and contextualized logical problems across various domains. The researchers investigate whether large language models (LLMs) demonstrate genuine reasoning capabilities when the underlying logical structure remains constant, focusing on standard propositional logic, including deductive and abductive reasoning. They construct datasets for deductive and abductive reasoning at four levels of difficulty, covering 12 distinct categories or domains based on Wikipedia categorization. The experiments aim to provide insight into the role of context in logical reasoning, the true reasoning capabilities of LLMs, and their generalization potential.

Low Difficulty Summary (original content by GrooveSquid.com)
This research looks at how well computers can reason logically by giving them problems from different areas like science, history, and more. The scientists want to know whether the computer solves these problems because it is good at logic or just because it has learned from context. They also look at whether fine-tuning the computer on one type of problem helps it with similar problems in a different area.
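To make the abstract-versus-contextualized distinction concrete, here is a minimal illustrative sketch (not the paper's actual dataset format or evaluation code): the same propositional-logic structure, chained modus ponens, rendered once with bare symbols and once with everyday wording. The rule pairs and wording below are invented for illustration.

```python
def forward_chain(facts, rules):
    """Derive every fact reachable via modus ponens.

    facts: set of propositions known to be true.
    rules: list of (premise, conclusion) pairs, read as "premise implies conclusion".
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Abstract version: pure symbols, no real-world context.
abstract_rules = [("A", "B"), ("B", "C")]
abstract_facts = {"A"}

# Contextualized version: the identical logical structure in natural language.
context_rules = [("it rains", "the ground is wet"),
                 ("the ground is wet", "worms surface")]
context_facts = {"it rains"}

print(sorted(forward_chain(abstract_facts, abstract_rules)))  # ['A', 'B', 'C']
print(sorted(forward_chain(context_facts, context_rules)))
```

A model that reasons purely over logical structure should answer both variants identically; divergence between them is the kind of signal the study's abstract/contextualized comparison is designed to expose.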

Keywords

» Artificial intelligence  » Fine tuning  » Generalization