

Rate, Explain and Cite (REC): Enhanced Explanation and Attribution in Automatic Evaluation by Large Language Models

by Aliyah R. Hsu, James Zhu, Zhichao Wang, Bin Bi, Shubham Mehrotra, Shiva K. Pentyala, Katherine Tan, Xiang-Bo Mao, Roshanak Omrani, Sougata Chaudhuri, Regunathan Radhakrishnan, Sitaram Asur, Claire Na Cheng, Bin Yu

First submitted to arXiv on: 3 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces three fine-tuned, general-purpose LLM auto-evaluators that rate generated text along several dimensions, including faithfulness, instruction following, coherence, and completeness. Alongside each rating, the models provide detailed explanations and verifiable citations, increasing trust in the evaluated content. The auto-evaluators support different citation modes to accommodate varying latency and granularity requirements. Experimental results show that the strongest model, REC-70B, outperforms state-of-the-art LLMs at content evaluation, achieving Rank #1 on the RewardBench leaderboard under the model name TextEval-Llama3.1-70B.
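To make the rate/explain/cite idea concrete, here is a minimal sketch of what one evaluation record might look like. The field names, scoring scale, and structure are assumptions for illustration only, not the paper's actual output format.

```python
# Hypothetical sketch of a rate-explain-cite evaluation record.
# Field names and the 1-5 rating scale are assumptions, not the paper's spec.
from dataclasses import dataclass, field


@dataclass
class Citation:
    source_id: str  # identifier of the cited source passage
    span: str       # quoted text that supports the rating


@dataclass
class EvaluationResult:
    metric: str        # e.g. "faithfulness", "coherence", "completeness"
    rating: int        # assumed 1-5 scale
    explanation: str   # natural-language justification for the rating
    citations: list[Citation] = field(default_factory=list)


# Example record for the "faithfulness" dimension.
result = EvaluationResult(
    metric="faithfulness",
    rating=4,
    explanation="The response is mostly grounded in the source document.",
    citations=[Citation(source_id="doc1", span="...supporting sentence...")],
)
print(result.metric, result.rating)
```

Different citation modes, as described in the summary, could then trade off how fine-grained the `span` field is (sentence-level vs. passage-level) against the latency of producing it.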
Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about building special computer models that check whether text generated by other AI models is accurate and trustworthy. These "evaluators" measure things like how faithful the generated text is to its sources, how well it follows instructions, and how coherent it is. They also explain their ratings and provide references that can be checked, which makes the generated text easier to trust. The best model, called REC-70B, does this task better than other similar models.

Keywords

» Artificial intelligence