Summary of Increasing the LLM Accuracy for Question Answering: Ontologies to the Rescue!, by Dean Allemang et al.


Increasing the LLM Accuracy for Question Answering: Ontologies to the Rescue!

by Dean Allemang, Juan Sequeda

First submitted to arXiv on: 20 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Databases (cs.DB); Information Retrieval (cs.IR); Logic in Computer Science (cs.LO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.
Medium Difficulty Summary (GrooveSquid.com, original content)
The paper investigates how to improve the accuracy and reduce the error rate of question-answering systems powered by Large Language Models (LLMs). Specifically, it builds on a Text-to-SPARQL approach, in which the LLM queries a knowledge graph/semantic representation of an enterprise SQL database; this has been shown to achieve higher accuracy than generating SQL directly. Building on that previous research, the authors present two key contributions: Ontology-Based Query Check (OBQC) and LLM Repair. OBQC detects errors by using the ontology of the knowledge graph to check whether the LLM-generated SPARQL query matches the semantics of the ontology, and LLM Repair feeds the resulting error explanations back to an LLM to repair the query (a schematic of this check-and-repair loop is sketched after the summaries below). Evaluated on the "chat with the data" benchmark, the approach increases overall accuracy from 54% to 72%, including an additional 8% of "I don't know" (unknown) results, with an overall error rate of 20%.
Low Difficulty Summary (GrooveSquid.com, original content)
The paper looks at how to make question-answering systems more accurate and reliable. It uses special computer programs called Large Language Models that can understand natural language and answer questions. These models are good at answering questions when they have access to lots of information, like a big database. The authors show that accuracy goes up when the model queries the database through a special representation, an approach called Text-to-SPARQL. They then suggest two ways to make it even better: checking whether the generated queries match what the data actually means, and fixing the mistakes that the check finds. This helps get the right answer 72% of the time, which is a big improvement.

Keywords

» Artificial intelligence  » Knowledge graph  » Large language model  » Question answering