Summary of Contestable AI Needs Computational Argumentation, by Francesco Leofante and Hamed Ayoobi and Adam Dejl and Gabriel Freedman and Deniz Gorur and Junqi Jiang and Guilherme Paulino-Passos and Antonio Rago and Anna Rapberger and Fabrizio Russo and Xiang Yin and Dekai Zhang and Francesca Toni
Contestable AI needs Computational Argumentation
by Francesco Leofante, Hamed Ayoobi, Adam Dejl, Gabriel Freedman, Deniz Gorur, Junqi Jiang, Guilherme Paulino-Passos, Antonio Rago, Anna Rapberger, Fabrizio Russo, Xiang Yin, Dekai Zhang, Francesca Toni
First submitted to arXiv on: 17 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper explores how AI systems can be made contestable, in line with guidelines and regulations promoting transparency in automated decision-making. It argues that contestable AI requires dynamic explainability and decision-making processes, enabling machines to interact with humans or other machines to justify their outputs and assess grounds for challenge. The authors propose computational argumentation as a suitable approach to support this rethinking of the AI landscape. |
| Low | GrooveSquid.com (original content) | The paper is about making artificial intelligence (AI) more open to questions and challenges. Right now, most AI systems don’t allow for debate or disagreement. But experts say that’s not good enough. They think AI should be able to explain itself and change its decisions if people disagree. The authors suggest a new way of doing AI that lets machines discuss their choices with humans and with each other. |
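The paper proposes computational argumentation but this page gives no algorithmic detail, so the following is only an illustrative sketch of the standard formalism behind it: a Dung-style abstract argumentation framework, with the grounded extension computed as the least fixed point of the characteristic function. All argument names below are hypothetical, not taken from the paper.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the least fixed point of F(S) = {a | S defends a}."""
    def defends(s, a):
        # `s` defends `a` if every attacker of `a` is itself
        # attacked by some argument in `s`.
        attackers = {x for (x, y) in attacks if y == a}
        return all(any((z, x) in attacks for z in s) for x in attackers)

    s = set()
    while True:
        nxt = {a for a in arguments if defends(s, a)}
        if nxt == s:  # fixed point reached
            return s
        s = nxt

# Hypothetical contestation scenario: decision `d` is challenged by
# objection `o`, which is in turn rebutted by counter-argument `c`.
args = {"d", "o", "c"}
atts = {("o", "d"), ("c", "o")}
print(grounded_extension(args, atts))  # the set {'c', 'd'}
```

Here the objection `o` is defeated by `c`, so the original decision `d` survives the challenge; changing the attack relation (e.g., removing the rebuttal) would instead leave `d` out of the extension, mirroring a successful contestation.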