Summary of Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Utilization, by Miyoung Ko et al.


Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Utilization

by Miyoung Ko, Sue Hyun Park, Joonsuk Park, Minjoon Seo

First submitted to arxiv on: 27 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
Read the original abstract here.

Medium Difficulty Summary — written by GrooveSquid.com (original content)
The proposed method deconstructs complex real-world questions into a graph, representing each question as a node whose predecessors are the pieces of background knowledge needed to solve it. The resulting DepthQA dataset deconstructs questions into three depths: recalling conceptual knowledge, applying procedural knowledge, and analyzing strategic knowledge. Quantifying forward and backward discrepancies in large language model (LLM) performance on simpler sub-problems versus complex questions reveals that smaller models exhibit more discrepancies than larger ones. Patterns of discrepancy are observed across model capacities and the possibility of training-data memorization. Guiding models from simpler to more complex questions through multi-turn interactions improves performance across model sizes, highlighting the importance of structured intermediate steps in knowledge reasoning.

Low Difficulty Summary — written by GrooveSquid.com (original content)
Large language models (LLMs) are super smart computer programs that can answer many types of questions. But did you know that how they come up with their answers is still a mystery? Researchers tried to figure out how LLMs think by breaking big questions down into smaller pieces, like puzzle pieces. They built a special set of questions called DepthQA and tested different-sized models on it. Surprisingly, the smaller models could often answer the easy pieces, yet still got stuck when those pieces were put together into a harder question! The researchers also found that guiding the models to start with simpler questions and then move on to harder ones helped them answer questions better overall.

Keywords

  • Artificial intelligence
  • Large language model