
Summary of CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses, by Jing Yao et al.


CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses

by Jing Yao, Xiaoyuan Yi, Xing Xie

First submitted to arXiv on: 15 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The rapid progress of Large Language Models (LLMs) raises concerns about their generating unethical content, emphasizing the need to assess LLMs' values and identify misalignments. To overcome the challenges of open-ended value evaluation, the authors introduce CLAVE, a framework combining two complementary LLMs: a large one that extracts high-level value concepts from a small set of human labels, leveraging its generalizability, and a small one fine-tuned on these concepts to align with human value understanding. This dual-model approach enables calibration to any value system using fewer than 100 human-labeled samples per value type. The authors also present ValEval, a comprehensive dataset covering three major value systems, and benchmark the capabilities of 12+ popular LLM evaluators.
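To make the dual-model design described above concrete, here is a minimal Python sketch of how such a two-step pipeline could look. It is a sketch under stated assumptions, not the paper's implementation: StubLM, extract_concepts, judge, and the prompt wording are all illustrative names invented here, assuming only a generic text-in/text-out model client.

# Hypothetical sketch of a CLAVE-style two-step evaluation pipeline.
# StubLM stands in for real model clients; the function names and prompt
# wording are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass


class StubLM:
    """Minimal stand-in for an LLM client with a text-in, text-out interface."""
    def __init__(self, canned_reply: str) -> None:
        self.canned_reply = canned_reply

    def generate(self, prompt: str) -> str:
        return self.canned_reply


@dataclass
class LabeledSample:
    response: str   # an LLM-generated response
    aligned: bool   # human judgment: does it align with the target value?


def extract_concepts(large_lm, value: str, samples: list[LabeledSample]) -> list[str]:
    """Step 1: a large LLM generalizes a handful of human labels
    (fewer than 100 per value type) into high-level value concepts."""
    examples = "\n".join(f"Response: {s.response}\nAligned: {s.aligned}" for s in samples)
    prompt = (
        f"Value: {value}\n{examples}\n"
        "List the high-level concepts that separate aligned from misaligned responses."
    )
    return [line for line in large_lm.generate(prompt).splitlines() if line.strip()]


def judge(small_lm, value: str, concepts: list[str], response: str) -> bool:
    """Step 2: a smaller LLM, fine-tuned on such concepts, scores a new
    response for alignment with the value."""
    prompt = (
        f"Value: {value}\nConcepts:\n- " + "\n- ".join(concepts)
        + f"\nResponse: {response}\nAligned (yes/no)?"
    )
    return small_lm.generate(prompt).strip().lower().startswith("yes")


if __name__ == "__main__":
    samples = [LabeledSample("I refuse to help with that scam.", True),
               LabeledSample("Sure, here is how to deceive them.", False)]
    concepts = extract_concepts(StubLM("honesty\nnon-deception"), "honesty", samples)
    print(judge(StubLM("yes"), "honesty", concepts, "I will answer truthfully."))

The split of labor is the point of the design: only the concept-extraction step needs the large model's generalization ability, while the cheap fine-tuned judge handles the bulk of per-response scoring, which is what keeps recalibration to a new value system inexpensive.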
Low Difficulty Summary (original content by GrooveSquid.com)
Large Language Models (LLMs) are getting smarter, but they can also produce harmful content. To prevent this, we need to check what values these models hold. The problem is that current methods rely on other models or on humans to decide what's good and bad, and those judges can be biased themselves. The paper proposes a new way to evaluate LLMs' values by combining two models: a large one and a small one. The large model extracts the key ideas (value concepts) from a few human-labeled examples, and the small model learns from those concepts to judge new responses. This works better than using a single model alone. The authors also built a big dataset with many examples covering different value systems, and their results show the method is better at judging values.

Keywords

» Artificial intelligence