Human Evaluation of Procedural Knowledge Graph Extraction from Text with Large Language Models
by Valentina Anita Carriero, Antonia Azzini, Ilaria Baroni, Mario Scrocca, Irene Celino
First submitted to arXiv on: 27 Nov 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computation and Language (cs.CL); Human-Computer Interaction (cs.HC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A novel approach is proposed for representing procedural knowledge in a Knowledge Graph (KG) using Large Language Model (LLM) capabilities. The method relies on prompt engineering to extract steps, actions, objects, equipment, and temporal information from textual procedures; the extracted information is then used to populate the procedural KG according to a pre-defined ontology (a minimal code sketch of this pipeline follows the table). The quality and usefulness of the LLM-extracted procedural knowledge are evaluated through a user study that collects both quantitative and qualitative measures and also probes the human evaluators’ subjective perception of AI. The results show that LLMs can produce outputs of acceptable quality. |
| Low | GrooveSquid.com (original content) | Procedural knowledge is about knowing how to do things step by step. Right now, this information is usually written in natural-language texts, like recipes or instructions. To make these procedures easier to use, we want to put the steps and details into a special kind of map called a Knowledge Graph (KG). We use powerful computer models called Large Language Models (LLMs) to help us do this. Our approach involves giving the LLMs specific tasks (or “prompts”) to extract the important information from texts, like what needs to be done and when. We then test how well our method works by asking people what they think of the results. The good news is that the computer models can produce useful information, and we also learn more about how humans perceive AI. |
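The medium-difficulty summary describes a two-stage pipeline: prompt an LLM to extract steps, actions, objects, equipment, and temporal information from a textual procedure, then populate a procedural KG following a pre-defined ontology. The Python sketch below illustrates one way such a pipeline could be wired together; the model name, prompt wording, JSON schema, and the `proc:` ontology namespace are all illustrative assumptions, not the prompts or ontology used in the paper.

```python
"""Hypothetical sketch of an LLM-based procedural KG extraction pipeline.
All names (prompt, model, ontology namespace) are illustrative assumptions."""
import json

from openai import OpenAI  # assumes the OpenAI Python client is installed
from rdflib import Graph, Literal, Namespace, RDF

# Invented ontology namespace; the paper uses its own pre-defined ontology.
PROC = Namespace("http://example.org/procedural#")

PROMPT = (
    "Extract the procedure below as JSON with a 'steps' list. Each step must "
    "have: 'action', 'objects', 'equipment', and 'temporal' (ordering or "
    "duration information).\n\nProcedure:\n{text}"
)


def extract_procedure(text: str) -> dict:
    """Ask the LLM to return the procedural elements as structured JSON."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)


def populate_kg(procedure: dict) -> Graph:
    """Map the extracted JSON onto KG triples using the (assumed) ontology."""
    g = Graph()
    g.bind("proc", PROC)
    for i, step in enumerate(procedure.get("steps", []), start=1):
        step_uri = PROC[f"step_{i}"]
        g.add((step_uri, RDF.type, PROC.Step))
        g.add((step_uri, PROC.hasAction, Literal(step["action"])))
        for obj in step.get("objects", []):
            g.add((step_uri, PROC.involvesObject, Literal(obj)))
        for eq in step.get("equipment", []):
            g.add((step_uri, PROC.requiresEquipment, Literal(eq)))
        if step.get("temporal"):
            g.add((step_uri, PROC.hasTemporalInfo, Literal(step["temporal"])))
    return g


if __name__ == "__main__":
    steps = extract_procedure("Boil water. Then steep the tea for 3 minutes.")
    print(populate_kg(steps).serialize(format="turtle"))
```

Asking the LLM for JSON and mapping it to triples in a separate step keeps the extraction prompt decoupled from the ontology mapping, mirroring the prompt-engineering-then-population structure described in the summaries above.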
Keywords
» Artificial intelligence » Knowledge graph » Large language model » Prompt