Summary of Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software, by Zhenpeng Chen et al.
Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software
by Zhenpeng Chen, Xinyue Li, Jie M. Zhang, Federica Sarro, Yang Liu
First submitted to arXiv on: 11 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Software Engineering (cs.SE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | A novel ensemble approach called FairHOME is introduced for enhancing intersectional fairness in machine learning software during inference. Inspired by social science theories on diversity, FairHOME generates mutants representing diverse subgroups for each input instance, broadening the perspectives considered and fostering a fairer decision-making process. Unlike conventional ensemble methods, FairHOME combines the predictions for the original input and its mutants, all produced by the same ML model, so it can be applied to already-deployed ML software without training new models. FairHOME is evaluated against seven state-of-the-art fairness improvement methods across 24 decision tasks using widely adopted fairness metrics, and it consistently outperforms them, enhancing intersectional fairness by 47.5% on average.
Low | GrooveSquid.com (original content) | FairHOME is a new way to make machine learning more fair. It helps by looking at things from different perspectives. Imagine having friends with different ideas and experiences. That’s what FairHOME does for computer programs that make decisions. It makes sure the decision-making process takes into account many different viewpoints. This approach can even be used on already existing programs without needing to create new ones. In testing, FairHOME performed better than other methods in making fairer decisions.
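To make the ensemble idea concrete, here is a minimal sketch of FairHOME-style inference: for one input, generate mutants that differ only in their protected-attribute values (covering the intersectional subgroups), query the same model on all of them, and combine the predictions. The function names, the dict-based input format, and the use of a simple majority vote as the combination rule are illustrative assumptions, not the paper's exact implementation.

```python
from itertools import product
from collections import Counter

def fairhome_predict(model, instance, protected_attrs, attr_values):
    """Ensemble one model's predictions over an input and its
    higher-order mutants, which differ only in protected attributes.

    model           -- callable mapping a feature dict to a class label
    instance        -- dict of feature name -> value
    protected_attrs -- protected attribute names, e.g. ["sex", "race"]
    attr_values     -- dict mapping each protected attribute to its
                       possible values
    """
    votes = []
    # Enumerate every combination of protected-attribute values; each
    # combination yields one mutant (the original combination included),
    # so every intersectional subgroup contributes one "perspective".
    for combo in product(*(attr_values[a] for a in protected_attrs)):
        mutant = dict(instance)
        for attr, value in zip(protected_attrs, combo):
            mutant[attr] = value
        votes.append(model(mutant))
    # Combine the same model's predictions; a majority vote is used
    # here as an illustrative combination rule.
    return Counter(votes).most_common(1)[0][0]

# Hypothetical biased model: denies (0) only one intersectional subgroup.
def biased_model(x):
    return 0 if (x["sex"] == "female" and x["race"] == "B") else 1

pred = fairhome_predict(
    biased_model,
    {"sex": "female", "race": "B", "income": 50},
    ["sex", "race"],
    {"sex": ["female", "male"], "race": ["A", "B"]},
)
print(pred)  # the mutants outvote the biased prediction -> 1
```

Because the ensemble only wraps calls to an existing model at inference time, this style of mitigation needs no retraining, which is what lets FairHOME be applied to already-deployed software.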
Keywords
» Artificial intelligence » Inference » Machine learning