Summary of Defeaters and Eliminative Argumentation in Assurance 2.0, by Robin Bloomfield et al.
Defeaters and Eliminative Argumentation in Assurance 2.0
by Robin Bloomfield, Kate Netkachova, John Rushby
First submitted to arXiv on: 16 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Logic in Computer Science (cs.LO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | In this paper, the researchers explore how assurance cases can be used in machine learning (ML) applications. They argue that traditional assurance cases can be adapted to ML models by using positive arguments that rely on evidence and assumptions. The authors propose a framework for building such cases, in which reasoning steps grounded on data and assumptions support a top claim with external significance (see the illustrative sketch after this table). |
Low | GrooveSquid.com (original content) | Machine learning is a type of artificial intelligence (AI) that enables computers to learn from experience and make decisions without being explicitly programmed. This paper shows how assurance cases can help humans evaluate the quality and reliability of ML models. The authors believe this approach will increase trust in AI decision-making and improve its adoption. |
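To make the structure described in the medium-difficulty summary more concrete, here is a minimal illustrative sketch in Python of an assurance-case argument: a top claim supported by subclaims grounded on evidence and assumptions, with defeaters (reasons for doubt, as named in the paper’s title) that must be eliminated before a claim counts as supported. The class and field names are hypothetical, chosen only for this sketch; they are not taken from the paper or from any Assurance 2.0 tooling.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the class and field names below are hypothetical
# and not taken from the paper; they merely mirror the structure in the
# medium-difficulty summary (a top claim supported by reasoning grounded on
# evidence and assumptions) plus the defeaters named in the paper's title.


@dataclass
class Evidence:
    description: str


@dataclass
class Defeater:
    # A reason for doubting a claim; confidence requires it to be eliminated.
    doubt: str
    eliminated: bool = False


@dataclass
class Claim:
    statement: str
    assumptions: List[str] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)
    defeaters: List[Defeater] = field(default_factory=list)

    def supported(self) -> bool:
        # A claim counts as supported here when it is grounded (directly by
        # evidence, or by subclaims that are all themselves supported) and
        # every defeater attached to it has been eliminated.
        grounded = bool(self.evidence) or (
            bool(self.subclaims) and all(c.supported() for c in self.subclaims)
        )
        no_open_doubts = all(d.eliminated for d in self.defeaters)
        return grounded and no_open_doubts


if __name__ == "__main__":
    top = Claim(
        statement="The ML component is acceptably reliable in its context",
        assumptions=["The operational context is as specified"],
    )
    sub = Claim(
        statement="Test results meet the agreed error bound",
        evidence=[Evidence("Held-out test campaign report")],
        defeaters=[Defeater("Test data may not match deployment data",
                            eliminated=True)],
    )
    top.subclaims.append(sub)
    print(top.supported())  # True: grounded, and all defeaters are eliminated
```

In this sketch, confidence in the top claim follows the eliminative idea from the paper’s title: a claim is only treated as supported once every recorded doubt about it has been resolved, not merely because positive evidence exists.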
Keywords
- Artificial intelligence
- Machine learning