Summary of Argumentative Large Language Models For Explainable and Contestable Decision-making, by Gabriel Freedman et al.
Argumentative Large Language Models for Explainable and Contestable Decision-Making
by Gabriel Freedman, Adam Dejl, Deniz Gorur, Xiang Yin, Antonio Rago, Francesca Toni
First submitted to arXiv on: 3 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper introduces a new approach to using large language models (LLMs) for decision-making by incorporating argumentative reasoning. The authors propose Argumentative LLMs (ArgLLMs), which construct argumentation frameworks that serve as the basis for formal reasoning in support of decision-making. Because the reasoning is made explicit, decisions made by ArgLLMs can be explained and contested, making them more transparent and accountable. The paper evaluates ArgLLMs experimentally on claim verification, comparing them against state-of-the-art techniques. |
| Low | GrooveSquid.com (original content) | This paper is about using big language models for decision-making. It’s like a super smart computer that can help us make good choices. But right now, these computers can’t explain why they made certain decisions. This new idea, called Argumentative LLMs (ArgLLMs), tries to fix that by making the computer think more like a human lawyer: building an argument and using logic to support its decision. This means anyone can understand, and even challenge, the reasoning behind the computer’s choice. |
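To make the idea of "formal reasoning over an argumentation framework" concrete, here is a minimal sketch of how the strength of a claim could be computed from supporting and attacking arguments. This is an illustrative quantitative bipolar setup with a DF-QuAD-style aggregation; the summary above does not specify the exact semantics ArgLLMs use, so the function names and formulas here are our assumptions, not the paper's definitions.

```python
# Illustrative sketch: evaluating a claim in a quantitative bipolar
# argumentation framework, the kind of structure an ArgLLM could build.
# The DF-QuAD-style rules below are an assumed choice of semantics.

def combine(strengths):
    """Aggregate several attacker (or supporter) strengths into one
    value in [0, 1] via 1 - prod(1 - s): more arguments, more influence."""
    acc = 1.0
    for s in strengths:
        acc *= (1.0 - s)
    return 1.0 - acc

def final_strength(base, attackers, supporters):
    """Move the claim's base score toward 0 when attacks dominate
    and toward 1 when supports dominate (DF-QuAD-style influence)."""
    a = combine(attackers)   # aggregated attack strength
    s = combine(supporters)  # aggregated support strength
    if a >= s:
        return base - base * (a - s)
    return base + (1.0 - base) * (s - a)

# Example: a claim with a neutral base score of 0.5, one attacking
# argument of strength 0.4 and one supporting argument of strength 0.7.
score = final_strength(0.5, attackers=[0.4], supporters=[0.7])
print(score)  # support outweighs attack, so the score rises above 0.5
```

Because the final score is a deterministic function of the arguments and their strengths, a user can contest a decision by challenging any individual argument and re-evaluating, which is the transparency property the summaries emphasize.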