ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph
by Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
First submitted to arXiv on: 30 Dec 2023
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract (available on arXiv) |
Medium | GrooveSquid.com (original content) | This paper proposes a new approach to Question Answering over Knowledge Graphs (KGQA), the task of answering natural language questions by reasoning over large-scale knowledge graphs. The authors simplify the typical two-module pipeline, a pre-trained language model paired with a graph neural network, by developing a single, more capable pre-trained language model called ReasoningLM. The model combines a subgraph-aware self-attention mechanism for structured reasoning over KG subgraphs with an adaptation tuning strategy that updates model parameters using synthesized questions (a minimal sketch of the attention idea follows this table). Experimental results show that ReasoningLM outperforms state-of-the-art models while updating fewer parameters and using less training data. |
Low | GrooveSquid.com (original content) | This paper helps computers answer questions using big databases of knowledge. It’s like asking a super smart librarian who knows everything about the world! The authors make computers better at answering complex questions by giving them a special kind of brain called ReasoningLM. This brain is really good at understanding how things are connected in these huge knowledge databases, and it can learn even from smaller amounts of information. The results show that this new brain is way better than the old ones, even when we don’t give it as much training! |
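For readers who want a more concrete picture of the "subgraph-aware self-attention" mentioned in the medium summary, the sketch below shows the general masking idea: attention scores are computed as usual, then pairs of positions that are not connected in the serialized subgraph are masked out before the softmax. This is a minimal illustration under assumptions, not the authors' actual implementation; the function name `subgraph_aware_attention`, the single-head simplification, and the specific visibility rules for question tokens are hypothetical.

```python
import torch
import torch.nn.functional as F

def subgraph_aware_attention(x: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
    """Single-head self-attention restricted by a subgraph adjacency mask.

    x:         (num_nodes, dim) embeddings for question tokens and subgraph entities
    adjacency: (num_nodes, num_nodes) boolean mask; True where position i
               may attend to position j (e.g., entities linked in the KG
               subgraph; question tokens visible to every position)
    """
    dim = x.size(-1)
    # Tied Q/K/V projections omitted for brevity; a real model would use
    # separate learned linear layers here.
    scores = (x @ x.transpose(-2, -1)) / dim ** 0.5
    # Disallow attention between positions not linked in the subgraph, so
    # information flows along graph edges rather than densely.
    scores = scores.masked_fill(~adjacency, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # assumes self-loops, so no row is fully masked
    return weights @ x


# Toy usage: 2 question tokens + 3 subgraph entities (5 positions total).
n = 5
adjacency = torch.eye(n, dtype=torch.bool)        # self-loops
adjacency[:2, :] = True                           # question tokens attend everywhere
adjacency[:, :2] = True                           # every position can see the question
adjacency[2, 3] = adjacency[3, 2] = True          # one edge between two entities
out = subgraph_aware_attention(torch.randn(n, 16), adjacency)
```

Masking before the softmax keeps the operation a drop-in replacement for dense self-attention, which is plausibly why such a structural constraint can be added to a pre-trained language model without changing its overall architecture.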
Keywords
» Artificial intelligence » Language model » Question answering » Self attention