The Disparate Benefits of Deep Ensembles
by Kajetan Schweighofer, Adrian Arnaiz-Rodriguez, Sepp Hochreiter, Nuria Oliver
First submitted to arXiv on: 17 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here. |
Medium | GrooveSquid.com (original content) | This paper investigates how ensembles of deep neural networks, known as Deep Ensembles, affect algorithmic fairness. Algorithmic fairness assesses how a model’s performance varies across groups defined by protected attributes such as age, gender, or race. The researchers find that Deep Ensembles can produce disparate benefits: their performance gains favor certain groups over others, affecting fairness metrics such as statistical parity and equal opportunity. They identify per-group differences in predictive diversity among the ensemble members as the likely cause of this effect. To mitigate the unfairness, the paper proposes post-processing methods that preserve the improved performance of Deep Ensembles (see the illustrative sketch after this table). |
Low | GrooveSquid.com (original content) | This study looks at how combining many deep learning models (called Deep Ensembles) affects fairness when they’re used to make decisions about different groups of people. Right now, we don’t fully understand how this works. The researchers found that using Deep Ensembles can help some groups more than others, which isn’t fair. They think this happens because the models disagree with each other more for some groups than for others. To fix this unfairness, they suggest adjusting the results after they’re calculated, while still keeping the benefits of using many models. |
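To make the fairness ideas above concrete, here is a minimal Python sketch of how the quantities mentioned in the summaries might be computed. It is a hedged illustration only: the data is randomly generated, and the variance-based diversity proxy and quantile-based threshold adjustment are plausible stand-ins, not necessarily the exact measures or post-processing methods used in the paper.

```python
import numpy as np

# Hypothetical setup: M ensemble members, N samples, and a binary
# protected group attribute per sample (all randomly generated).
rng = np.random.default_rng(seed=0)
M, N = 5, 1000
member_probs = rng.uniform(size=(M, N))  # each member's P(y=1 | x)
groups = rng.integers(0, 2, size=N)      # protected attribute (0 or 1)

# Deep Ensemble prediction: average the members' probabilities, then threshold.
ensemble_probs = member_probs.mean(axis=0)
ensemble_pred = (ensemble_probs >= 0.5).astype(int)

# Statistical parity difference: the gap in positive-prediction rates
# between the two groups (0 means parity).
rate_g0 = ensemble_pred[groups == 0].mean()
rate_g1 = ensemble_pred[groups == 1].mean()
print(f"statistical parity difference: {abs(rate_g0 - rate_g1):.4f}")

# Per-group predictive diversity, proxied here by the average variance of
# member probabilities within each group: if one group's diversity is much
# larger, that group stands to gain more from ensembling.
for g in (0, 1):
    diversity = member_probs[:, groups == g].var(axis=0).mean()
    print(f"group {g} predictive diversity (variance proxy): {diversity:.4f}")

# A simple post-processing idea (hypothetical, not the paper's exact method):
# give each group its own threshold, chosen as the same score quantile, so
# both groups end up with roughly equal positive-prediction rates.
target_rate = ensemble_pred.mean()
fair_pred = np.zeros(N, dtype=int)
for g in (0, 1):
    mask = groups == g
    thresh = np.quantile(ensemble_probs[mask], 1 - target_rate)
    fair_pred[mask] = (ensemble_probs[mask] >= thresh).astype(int)
```

On random inputs the parity gap is near zero by construction; on real model outputs, comparing these quantities for single models versus the ensemble is one way to observe the disparate-benefit effect the summaries describe.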
Keywords
* Artificial intelligence
* Deep learning