
Summary of STACKFEED: Structured Textual Actor-Critic Knowledge Base Editing with FeedBack, by Naman Gupta et al.


STACKFEED: Structured Textual Actor-Critic Knowledge Base Editing with FeedBack

by Naman Gupta, Shashank Kirtania, Priyanshu Gupta, Krishna Kariya, Sumit Gulwani, Arun Iyer, Suresh Parthasarathy, Arjun Radhakrishna, Sriram K. Rajamani, Gustavo Soares

First submitted to arXiv on: 14 Oct 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Machine Learning (cs.LG); Multiagent Systems (cs.MA)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on the paper's arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Large Language Models (LLMs) frequently generate outdated or incorrect information, particularly in low-resource settings or when dealing with private data. To address this, Retrieval-Augmented Generation (RAG) draws on external knowledge bases (KBs), but these can also contain inaccuracies. The proposed approach, STACKFEED, iteratively refines the KB based on expert feedback using a multi-actor, centralized-critic reinforcement learning framework. Each document is assigned to an actor, modeled as a ReAct agent, which performs structured edits based on targeted instructions from the centralized critic. Experiments show that STACKFEED significantly improves KB quality and RAG system performance, raising accuracy by up to 8% over baselines.
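To make the multi-actor, centralized-critic loop described above more concrete, here is a minimal Python sketch of the idea: a critic turns expert feedback into per-document edit instructions, and one actor per document applies structured edits to its document. This is an illustration only, not the authors' implementation; the class names, the string-based critic and actor stubs, and the refine_kb helper are hypothetical stand-ins for the LLM-driven ReAct agents and critic described in the paper.

```python
# Minimal sketch (not the authors' code) of a STACKFEED-style refinement loop.
# A centralized "critic" maps expert feedback to per-document instructions,
# and a per-document "actor" applies structured edits. All names here are
# hypothetical; real actors/critics would be LLM-backed agents.

from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    history: list = field(default_factory=list)  # log of structured edits


class Critic:
    """Centralized critic: turns expert feedback into targeted instructions."""

    def instructions_for(self, feedback: dict) -> dict:
        # feedback maps doc_id -> expert comment; in the paper this step
        # would be driven by an LLM, here it is a trivial stand-in.
        return {doc_id: f"Revise to address: {comment}"
                for doc_id, comment in feedback.items()}


class Actor:
    """Per-document actor (a ReAct-style agent in the paper)."""

    def __init__(self, document: Document):
        self.document = document

    def apply(self, instruction: str) -> None:
        # A real actor would reason over the document and emit structured
        # edits (insert/replace/delete); this stub just appends a note.
        edit = {"op": "append", "text": f"[correction] {instruction}"}
        self.document.history.append(edit)
        self.document.text += "\n" + edit["text"]


def refine_kb(documents, feedback, rounds: int = 1):
    """Iteratively refine the knowledge base from expert feedback."""
    critic = Critic()
    actors = {d.doc_id: Actor(d) for d in documents}
    for _ in range(rounds):
        for doc_id, instruction in critic.instructions_for(feedback).items():
            actors[doc_id].apply(instruction)
    return documents


if __name__ == "__main__":
    kb = [Document("api-auth", "Use v1 tokens for authentication.")]
    expert_feedback = {"api-auth": "v1 tokens are deprecated; mention v2."}
    for doc in refine_kb(kb, expert_feedback):
        print(doc.doc_id, "->", doc.text)
```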
Low Difficulty Summary (written by GrooveSquid.com, original content)
Large Language Models often state facts that are wrong or out of date, especially when they deal with private data or have too little information to draw on. To fix this, the authors developed a new way to improve knowledge bases using feedback from experts. Their approach, called STACKFEED, uses multiple agents that work together to refine the knowledge base and make it more accurate. In their tests, it improved accuracy by up to 8% compared to other approaches.

Keywords

» Artificial intelligence  » Knowledge base  » RAG  » Reinforcement learning  » Retrieval-augmented generation