Summary of Towards Non-Adversarial Algorithmic Recourse, by Tobias Leemann et al.
Towards Non-Adversarial Algorithmic Recourse
by Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci
First submitted to arXiv on: 15 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Research on adversarial examples and research on counterfactual explanations have so far proceeded as separate streams, and recent works have tried to understand their similarities and differences. A key claimed difference is that adversarial examples, unlike counterfactual explanations, lead to a misclassification with respect to the ground truth; however, existing methods for generating counterfactual explanations and adversarial examples are not aligned with this requirement. Our paper introduces non-adversarial algorithmic recourse and shows why high-stakes situations require counterfactual explanations that do not exhibit adversarial characteristics. We investigate how components of the objective function, such as the machine learning model or the distance measure used, determine whether the outcome is adversarial (a generic sketch of such an objective-driven counterfactual search appears after this table). Our experiments on common datasets show that these design choices are critical in deciding whether recourse is non-adversarial, and that robust and accurate machine learning models yield the less adversarial recourse desired in practice. |
Low | GrooveSquid.com (original content) | Researchers have been studying two related topics: how to explain the decisions machines make and how to trick those machines into making mistakes. A key difference is that a trick deliberately pushes the machine toward an answer that is actually wrong, while a good explanation should not do that. Most methods for generating these explanations or tricks do not respect this requirement. Our paper introduces a way to generate explanations without using tricks, shows why high-stakes situations need such explanations, and examines how different design choices affect the outcome. By testing our approach on common datasets, we found that making good design choices is crucial for creating non-tricky recourse. |
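To make the role of these objective-function components concrete, below is a minimal, hypothetical sketch of a generic counterfactual/recourse search in Python. The model `f`, the squared L2 distance cost, the trade-off weight `lam`, and the finite-difference gradient are illustrative assumptions chosen for this sketch; they are not the authors' actual method or code.

```python
import numpy as np

def find_counterfactual(f, x, target_class, lam=0.1, lr=0.05, steps=500):
    """Illustrative sketch (not the paper's method): search for a point x_cf
    near x that the model f assigns to target_class.

    f(x) is assumed to return class probabilities for a single 1-D input x.
    The loss combines (1 - probability of the target class) with a squared
    L2 distance to the original input, weighted by lam -- exactly the kind
    of objective components (model, distance measure) whose design the
    paper argues decides whether the result behaves like recourse or like
    an adversarial example. Gradients are approximated by finite
    differences to keep the sketch model-agnostic.
    """
    x_cf = x.astype(float)
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(x_cf)
        for i in range(len(x_cf)):
            e = np.zeros_like(x_cf)
            e[i] = eps
            loss_plus = (1 - f(x_cf + e)[target_class]) + lam * np.sum((x_cf + e - x) ** 2)
            loss_minus = (1 - f(x_cf - e)[target_class]) + lam * np.sum((x_cf - e - x) ** 2)
            grad[i] = (loss_plus - loss_minus) / (2 * eps)
        x_cf -= lr * grad
        if np.argmax(f(x_cf)) == target_class:
            break  # stop once the candidate reaches the target class
    return x_cf
```

As a hypothetical usage, with a scikit-learn classifier `clf` one could pass `f` as `lambda x: clf.predict_proba(x.reshape(1, -1))[0]`; swapping in a different model or distance measure changes how adversarial the resulting point tends to be, which is the design question the paper studies.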
Keywords
* Artificial intelligence
* Machine learning
* Objective function