Even-if Explanations: Formal Foundations, Priorities and Complexity

by Gianvincenzo Alfano, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, Irina Trubitsyna

First submitted to arXiv on: 17 Jan 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)

This paper investigates explainable AI, specifically local post-hoc explainability queries that aim to understand why individual inputs are classified by a machine learning model. The research focuses on semifactual explanations, which have received less attention than counterfactuals. The study shows that linear and tree-based models are more interpretable than neural networks. A preference-based framework is introduced, allowing users to personalize explanations based on their preferences, enhancing interpretability and user-centricity. The complexity of several interpretability problems in this framework is explored, with algorithms provided for polynomial cases.

Low Difficulty Summary (original content by GrooveSquid.com)

This paper makes AI more understandable by looking at how machine learning models work. It tries to answer questions like “Why did the model classify this input this way?” by using a special kind of explanation called semifactuals. The research shows that some types of models are better than others when it comes to explaining their decisions. To make explanations even better, the paper introduces a system where users can customize what they want to see based on their own preferences. This makes AI more user-friendly and easier to understand.

Keywords

  • Artificial intelligence
  • Attention
  • Machine learning