


Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes

by Yijun Bian, Yujie Luo, Ping Xu

First submitted to arXiv on: 12 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents a novel approach to evaluating the level of discrimination within machine learning models, particularly in scenarios where several sensitive attributes interact and each attribute may take more than two values. The proposed “harmonic fairness measure via manifolds” (HFM) is designed to capture fine-grained discrimination levels across such non-binary sensitive attributes. To accelerate computation, two approximation algorithms, ApproxDist and ExtendDist, are introduced to estimate bias in single-attribute and multi-attribute settings, respectively (an illustrative sketch of this kind of distance-based estimation appears after the summaries below). Empirical results demonstrate the effectiveness and efficiency of both algorithms, highlighting the value of HFM for mitigating discrimination within ML models.
Low Difficulty Summary (original content by GrooveSquid.com)
Imagine you’re trying to make sure a computer program is fair. This means it shouldn’t treat people differently based on things like their race or gender. But what if multiple factors affect how fair the program is? For example, a person’s gender and age might both matter. Researchers have developed a new way to measure fairness in these complex situations. They’ve also created two tools that make this fairness easier to calculate. By using these tools, we can build more accurate and fair computer programs. This matters because we want computers to make good decisions without being biased against certain groups of people.
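
The summaries mention a distance-based fairness measure (HFM) and two accelerations (ApproxDist and ExtendDist), but they do not spell out the definitions, so the following Python sketch is purely illustrative. It assumes that bias estimation reduces to comparing model predictions across the groups induced by a sensitive attribute, and that random pairwise sampling can stand in for the paper’s acceleration schemes; every function name here, and the harmonic-mean aggregation rule, is a hypothetical reading, not the authors’ actual algorithm.

import numpy as np

def approx_group_distance(preds, groups, n_samples=1000, seed=None):
    # Approximate an average prediction gap between members of different
    # groups by random pairwise sampling -- a stand-in for an
    # ApproxDist-style acceleration (the paper's exact procedure differs).
    rng = np.random.default_rng(seed)
    group_ids = np.unique(groups)
    total = 0.0
    for _ in range(n_samples):
        g_a, g_b = rng.choice(group_ids, size=2, replace=False)
        i = rng.choice(np.flatnonzero(groups == g_a))
        j = rng.choice(np.flatnonzero(groups == g_b))
        total += abs(float(preds[i]) - float(preds[j]))
    return total / n_samples

def multi_attribute_bias(preds, sensitive_attrs, n_samples=1000, seed=None):
    # Aggregate per-attribute distance estimates over several non-binary
    # sensitive attributes, loosely mirroring an ExtendDist-style
    # multi-attribute setting. The harmonic-mean combination below is a
    # hypothetical reading of the "harmonic" in HFM, not the paper's rule.
    scores = [approx_group_distance(preds, np.asarray(a), n_samples, seed)
              for a in sensitive_attrs]
    scores = np.maximum(np.asarray(scores), 1e-12)  # guard against division by zero
    return len(scores) / float(np.sum(1.0 / scores))

# Example: predictions for six people, with gender (3 values) and an
# age bracket (3 values) as non-binary sensitive attributes.
preds = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.1])
gender = np.array([0, 1, 2, 0, 1, 2])
age_bracket = np.array([0, 0, 1, 1, 2, 2])
print(multi_attribute_bias(preds, [gender, age_bracket], n_samples=500, seed=0))

Because the sampling estimate converges as n_samples grows, this kind of sketch trades a small amount of accuracy for much faster evaluation than comparing all pairs, which is the general motivation the summaries give for ApproxDist and ExtendDist.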

Keywords

  • Artificial intelligence
  • Machine learning