Summary of Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives, by Xinliang Frederick Zhang et al.
Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives
by Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
First submitted to arXiv on: 7 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Temporal reasoning, a fundamental aspect of human cognition, remains challenging for large language models (LLMs) despite their impressive performance on many tasks. This paper focuses on temporal graph generation, a crucial task in temporal reasoning, and studies the limitations of LLMs, including GPT-3.5/4, in this domain. The results show that even powerful models struggle with this task, while smaller models (<10B) lag behind by 50%. To bridge this gap, the authors propose Narrative-of-Thought (NoT), a prompting technique tailored for temporal reasoning. NoT converts events into Python classes and guides small models to generate temporally grounded narratives, ultimately producing temporal graphs. The paper presents extensive experiments demonstrating the effectiveness of NoT in improving various metrics, including F1 scores on the Schema-11 evaluation set and structural similarity. |
| Low | GrooveSquid.com (original content) | This research looks at how computers can understand time and relationships between events. Right now, even very smart computer programs struggle with this task because it's so complex. The authors of this paper wanted to find a way for smaller computer models to be better at understanding time and events, without needing to make the models bigger or more powerful. They came up with an idea called Narrative-of-Thought (NoT), which helps small models understand time by converting events into simple language and guiding them to create a timeline. The results show that NoT is very effective in improving how well computer models can understand time and relationships. |
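The medium summary describes NoT as converting events into Python classes and producing a temporal graph. A minimal illustrative sketch of that idea is below; the class names (`Event`, `TemporalGraph`) and the example events are hypothetical, not the paper's actual schema or prompting format:

```python
# Hypothetical sketch: represent events as Python classes, link them with
# before-edges, and recover a temporally grounded order via topological sort.
# This illustrates the general idea, not the paper's exact implementation.
from dataclasses import dataclass
from collections import deque


@dataclass
class Event:
    name: str
    description: str


class TemporalGraph:
    def __init__(self):
        self.events = {}   # name -> Event
        self.before = {}   # name -> set of events that happen after it

    def add_event(self, event):
        self.events[event.name] = event
        self.before.setdefault(event.name, set())

    def add_before(self, earlier, later):
        # Edge meaning: `earlier` happens before `later`.
        self.before[earlier].add(later)

    def narrative_order(self):
        # Kahn's algorithm: one valid temporal ordering of the events,
        # i.e. the order in which a narrative could recount them.
        indegree = {name: 0 for name in self.events}
        for later_set in self.before.values():
            for later in later_set:
                indegree[later] += 1
        queue = deque(n for n, d in indegree.items() if d == 0)
        order = []
        while queue:
            name = queue.popleft()
            order.append(name)
            for later in self.before[name]:
                indegree[later] -= 1
                if indegree[later] == 0:
                    queue.append(later)
        return order


# Toy example with invented events:
g = TemporalGraph()
for name, desc in [("board", "Passengers board the plane."),
                   ("taxi", "The plane taxis to the runway."),
                   ("takeoff", "The plane takes off.")]:
    g.add_event(Event(name, desc))
g.add_before("board", "taxi")
g.add_before("taxi", "takeoff")
print(g.narrative_order())  # ['board', 'taxi', 'takeoff']
```

In the paper's pipeline the LLM, rather than hand-written code, fills in such a structure; the sketch only shows why a graph of before-relations yields a coherent narrative ordering.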
Keywords
» Artificial intelligence » GPT » Prompting