
Summary of Evaluating Human Alignment and Model Faithfulness of LLM Rationale, by Mohsen Fayyaz et al.


Evaluating Human Alignment and Model Faithfulness of LLM Rationale

by Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng

First submitted to arXiv on: 28 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The original abstract is available on arXiv.

Medium Difficulty Summary (original GrooveSquid.com content)
We examine how well large language models (LLMs) explain their generations through rationales, focusing on two approaches: prompting-based methods and attribution-based methods that leverage attention or gradients. Our analysis covers three classification datasets with annotated rationales, spanning tasks with varying performance levels. The study reveals that prompting-based self-explanations are not always aligned with human rationales, while fine-tuning LLMs to improve accuracy also improves the alignment of attribution-based explanations. We further find that prompting-based self-explanations are less faithful than attribution-based explanations, failing to provide a reliable account of the model’s decision-making process. Our findings emphasize the importance of rigorous evaluations of LLM rationales. (A minimal code sketch of attribution-based rationale extraction follows the summaries below.)

Low Difficulty Summary (original GrooveSquid.com content)
This study looks at how well computers explain their decisions using special sets of words called “rationales.” The researchers compared two ways of getting these explanations: one uses prompts to guide the computer, and the other uses attention or gradients. They tested both methods on three datasets where the computer’s accuracy varies. The results show that the prompt-based explanations don’t always match human explanations, while the attention- and gradient-based explanations line up better once the computer is fine-tuned to make more accurate decisions. In short, those explanations match human reasoning better as the computer gets better at making predictions.

Keywords

» Artificial intelligence  » Alignment  » Attention  » Classification  » Fine tuning  » Prompting