Mutatis Mutandis: Revisiting the Comparator in Discrimination Testing

by Jose M. Alvarez, Salvatore Ruggieri

First submitted to arXiv on: 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper revisits the role of the comparator in discrimination testing, arguing that choosing a comparator is inherently a causal modeling exercise. The authors introduce a two-fold classification of the comparator: ceteris paribus (CP) and mutatis mutandis (MM). The CP comparator aims at an idealized comparison, differing from the complainant only in membership of the protected attribute. In contrast, the MM comparator represents what the complainant’s profile would have looked like had it not carried the effects of the protected attribute on the non-protected attributes. The authors illustrate the two comparators and their impact on discrimination testing using a real-world example, and they position generative models and machine learning methods as useful tools for constructing the MM comparator, enabling more complex and realistic comparisons when testing for discrimination.
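
To make the distinction concrete, here is a minimal Python sketch of the two comparators under an assumed toy structural model in which the protected attribute lowers one non-protected attribute. The variable names, the numbers, and the use of a linear regression are illustrative assumptions, not details taken from the paper.

    # Toy structural model: protected attribute A lowers the non-protected
    # attribute X (e.g., years of experience), which drives the decision.
    # All names and numbers are illustrative, not taken from the paper.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Synthetic population: A is the protected attribute (0/1); membership
    # in the protected group (A = 1) depresses X by roughly 3 units.
    n = 5_000
    A = rng.integers(0, 2, size=n)
    X = 10.0 - 3.0 * A + rng.normal(0.0, 1.0, size=n)

    # Estimate the effect of A on X from data; a generative model could
    # play the same role for richer, higher-dimensional profiles.
    model = LinearRegression().fit(A.reshape(-1, 1), X)
    effect_of_A_on_X = model.predict([[1]])[0] - model.predict([[0]])[0]

    # A complainant from the protected group.
    complainant = {"A": 1, "X": 7.2}

    # CP comparator: flip only the protected attribute; everything else is
    # held fixed, giving an idealized "all else equal" comparison.
    cp_comparator = {"A": 0, "X": complainant["X"]}

    # MM comparator: flip the protected attribute AND undo its estimated
    # effect on X, approximating the profile the complainant would have
    # had absent the effects of the protected attribute.
    mm_comparator = {"A": 0, "X": complainant["X"] - effect_of_A_on_X}

    print("complainant:  ", complainant)
    print("CP comparator:", cp_comparator)  # same X, flipped A
    print("MM comparator:", mm_comparator)  # X adjusted upward as well

A discrimination test would then compare the decision made for the complainant against the decision made for each comparator; in this setup, the MM comparator yields the more realistic counterfactual profile because it also accounts for how the protected attribute shaped the non-protected attribute.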
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about how to test whether someone is being treated unfairly because of who they are, for example, whether a person gets different treatment just because of their race or gender. To check this, the person’s actual profile is compared against a made-up profile showing what would have happened if those characteristics had not affected their life. The authors look at how special kinds of models can make these comparisons fairer and more realistic.

Keywords

» Artificial intelligence  » Classification  » Machine learning