Summary of Citation-Enhanced Generation for LLM-based Chatbots, by Weitao Li et al.


Citation-Enhanced Generation for LLM-based Chatbots

by Weitao Li, Junkai Li, Weizhi Ma, Yang Liu

First submitted to arXiv on: 25 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach to addressing hallucinated content in large language model-based chatbots. The authors note that existing remedies, such as retrieval-augmented generation and reinforcement learning with human feedback, require additional training and data annotation. Instead, they propose a post-hoc Citation-Enhanced Generation (CEG) approach that combines a retrieval module with a natural language inference (NLI)-based citation generation module. The method regenerates a response until every statement in it is supported by a citation, making it a plug-and-play add-on that works with various large language models (a minimal sketch of this loop follows the summaries below). The authors evaluate their framework on three benchmarks and show that it outperforms state-of-the-art methods in both hallucination detection and response regeneration.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how to make chatbots better at answering questions without making things up. Chatbots are really good at talking, but sometimes they say things that aren't true. The researchers who wrote this paper want to fix that problem. They came up with a new way to check whether the chatbot's answers have enough evidence to back them up. This method is special because it doesn't need any extra training or data. It works by looking up evidence for each part of the chatbot's answer and asking the chatbot to try again whenever a statement can't be backed up.
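To make the regenerate-until-cited idea above concrete, here is a minimal Python sketch of a post-hoc CEG-style loop. It is an illustration under stated assumptions, not the authors' implementation: the generate, split_statements, retrieve, and entails callables are hypothetical placeholders standing in for the LLM, the statement splitter, the retrieval module, and the NLI-based citation module described in the paper.

    # Hypothetical sketch of a post-hoc Citation-Enhanced Generation (CEG) loop.
    # Function names and control flow are illustrative assumptions, not the
    # authors' actual implementation.
    from typing import Callable, List, Tuple

    def ceg_pipeline(
        query: str,
        generate: Callable[[str], str],                # placeholder: the underlying chatbot LLM
        split_statements: Callable[[str], List[str]],  # placeholder: response -> atomic claims
        retrieve: Callable[[str], List[str]],          # placeholder: claim -> candidate documents
        entails: Callable[[str, str], bool],           # placeholder: NLI check (document, claim)
        max_rounds: int = 3,
    ) -> Tuple[str, List[Tuple[str, List[str]]]]:
        """Regenerate the response until every statement is supported by a citation,
        or the round budget is exhausted. Returns the response and per-claim citations."""
        response = generate(query)
        citations: List[Tuple[str, List[str]]] = []
        for _ in range(max_rounds):
            citations = []
            unsupported: List[str] = []
            for claim in split_statements(response):
                docs = retrieve(claim)
                support = [d for d in docs if entails(d, claim)]
                citations.append((claim, support))
                if not support:
                    unsupported.append(claim)
            if not unsupported:
                return response, citations  # every claim has at least one citation
            # Ask the model to revise the unsupported statements before re-checking.
            prompt = (
                f"{query}\n\nThe following statements in your previous answer lacked "
                "supporting evidence; please revise them:\n- " + "\n- ".join(unsupported)
            )
            response = generate(prompt)
        return response, citations

Because the loop only wraps an existing model's outputs, any chatbot exposing a text-in/text-out interface could be plugged in as generate, which is what makes this kind of approach post hoc and training-free.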

Keywords

» Artificial intelligence  » Hallucination  » Inference  » Large language model  » Reinforcement learning  » Retrieval augmented generation