Summary of Generate-on-Graph: Treat LLM as Both Agent and KG in Incomplete Knowledge Graph Question Answering, by Yao Xu et al.
Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering
by Yao Xu, Shizhu He, Jiabei Chen, Zihao Wang, Yangqiu Song, Hanghang Tong, Guang Liu, Kang Liu, Jun Zhao
First submitted to arXiv on: 23 Apr 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed method, Generate-on-Graph (GoG), integrates Large Language Models (LLMs) with Knowledge Graphs (KGs) to address the issues of insufficient knowledge and hallucination in LLMs. GoG is a training-free approach that generates new factual triples while exploring KGs, simulating real-world scenarios where KGs are incomplete. It treats the LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering (IKGQA). Experimental results on two datasets show that GoG outperforms previous methods. |
Low | GrooveSquid.com (original content) | To solve problems with Large Language Models (LLMs), researchers are combining LLMs with special databases called Knowledge Graphs (KGs). However, these approaches are usually tested on simple question-answering tasks where the database has all the information needed to answer each question. In real life, these databases often miss important details. To make things more realistic, we propose a new way to evaluate LLMs: asking questions over databases that are missing some of the needed information. We built special datasets for this and developed a method called Generate-on-Graph (GoG) that can generate new information based on the database without needing any training. GoG uses a framework that combines thinking, searching, and generating to treat the LLM like both an agent and a database (a rough sketch of this loop follows the table). Our tests show that GoG performs better than previous methods. |
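
The Thinking-Searching-Generating loop described above can be pictured with a short, heavily simplified sketch. This is not the authors' code: the `answer_with_gog` function, the `llm` callable, the `kg` dictionary, and the prompt strings are hypothetical placeholders standing in for the paper's actual prompting and KG interface. It only illustrates the core idea of letting the LLM act as a stand-in KG and generate triples when exploration of the incomplete KG comes up empty.

```python
# Hedged sketch of a Generate-on-Graph-style loop (not the authors' implementation).
# `llm` is assumed to be a callable that maps a prompt string to a text response;
# `kg` is assumed to be a dict mapping an entity to a list of (head, relation, tail) triples.

def answer_with_gog(question: str, kg: dict, llm, max_steps: int = 5) -> str:
    """Iterate Thinking -> Searching -> Generating until the LLM can answer."""
    context = []  # accumulated (head, relation, tail) triples seen so far

    for _ in range(max_steps):
        # Thinking: ask the LLM whether it can answer from the collected triples,
        # or which entity it should explore next.
        thought = llm(
            f"Question: {question}\nKnown triples: {context}\n"
            "Decide: SEARCH <entity> or ANSWER <answer>."
        )
        if thought.startswith("ANSWER"):
            return thought.removeprefix("ANSWER").strip()

        entity = thought.removeprefix("SEARCH").strip()

        # Searching: retrieve triples about the entity from the (possibly incomplete) KG.
        found = kg.get(entity, [])

        # Generating: if the KG has nothing useful, let the LLM play the role of the KG
        # and propose plausible new triples from its internal knowledge.
        if not found:
            generated = llm(
                f"Generate factual triples about {entity} relevant to: {question}\n"
                "One triple per line, fields separated by '|'."
            )
            found = [tuple(line.split("|")) for line in generated.splitlines() if "|" in line]

        context.extend(found)

    # Fall back to a direct answer if the step budget is exhausted.
    return llm(f"Question: {question}\nKnown triples: {context}\nAnswer:")
```

The point the sketch tries to capture is that the same model plays two roles: an agent deciding what to retrieve next, and a substitute KG that fills in missing facts when retrieval fails, which is why the approach needs no additional training.
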
Keywords
» Artificial intelligence » Hallucination » Knowledge graph » Question answering