Summary of NeuFair: Neural Network Fairness Repair with Dropout, by Vishnu Asutosh Dasu et al.
NeuFair: Neural Network Fairness Repair with Dropout
by Vishnu Asutosh Dasu, Ashish Kumar, Saeid Tizpaz-Niari, Gang Tan
First submitted to arXiv on: 5 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | This paper proposes neuron dropout as a post-processing technique to mitigate bias in trained deep neural networks (DNNs). The authors posit that existing mitigation methods may not be sufficient and instead suggest leveraging the dropout mechanism during inference to improve fairness. They introduce NeuFair, a family of randomized algorithms designed to minimize discrimination while maintaining model performance (a conceptual sketch of this idea appears after the table). The paper demonstrates that NeuFair improves fairness by up to 69% with minimal or no performance degradation. It also explores how hyperparameters influence the results and compares NeuFair to state-of-the-art bias mitigators. |
Low | GrooveSquid.com (original content) | This research looks at how to make sure deep learning models are fair and don't accidentally perpetuate biases. The authors think a technique called "neuron dropout" can help. They propose a new approach, called NeuFair, which uses randomized algorithms to decide which neurons to drop when the trained model makes predictions. This reduces unfairness in the predictions while keeping overall performance good. The paper shows that NeuFair can improve fairness by up to 69% without noticeably hurting the model's accuracy. It also explains how different settings affect the results. |
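
The summaries above describe the core mechanism at a conceptual level: after training, search for a subset of hidden neurons to drop at inference time so that a group-fairness metric improves while accuracy stays acceptable. The sketch below illustrates only that idea; the toy data, two-layer model, statistical parity metric, and simple random-walk search are illustrative assumptions, not the authors' implementation or the paper's exact algorithms.

```python
# Minimal sketch of dropout-based fairness repair as a post-processing step.
# Everything here (data, model, metric, search strategy) is a placeholder
# standing in for the general idea described in the paper summary.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 10 features; feature 0 plays the role of a protected attribute.
X = torch.randn(1000, 10)
protected = (X[:, 0] > 0).long()
y = ((X[:, 1] + 0.5 * X[:, 0]) > 0).long()  # labels correlated with the protected attribute

# Small MLP trained as usual; NeuFair-style repair happens only after training.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

@torch.no_grad()
def predict_with_mask(mask):
    """Run inference with the selected hidden neurons zeroed out (dropped)."""
    h = torch.relu(model[0](X)) * mask       # apply the drop mask to the hidden layer
    return model[2](h).argmax(dim=1)

def statistical_parity_diff(preds):
    """|P(yhat=1 | protected=1) - P(yhat=1 | protected=0)|, one common group-fairness metric."""
    return abs(preds[protected == 1].float().mean() - preds[protected == 0].float().mean()).item()

def accuracy(preds):
    return (preds == y).float().mean().item()

# Random-walk search over drop masks (the paper uses randomized algorithms;
# this particular search is a simplification for illustration).
best_mask = torch.ones(32)                   # start with no neurons dropped
best_unfairness = statistical_parity_diff(predict_with_mask(best_mask))
baseline_acc = accuracy(predict_with_mask(best_mask))

for _ in range(500):
    candidate = best_mask.clone()
    idx = torch.randint(0, 32, (1,))
    candidate[idx] = 1 - candidate[idx]      # toggle one neuron on/off
    preds = predict_with_mask(candidate)
    cand_unfairness = statistical_parity_diff(preds)
    # Accept only if fairness improves and accuracy degradation stays small.
    if cand_unfairness < best_unfairness and accuracy(preds) >= baseline_acc - 0.02:
        best_mask, best_unfairness = candidate, cand_unfairness

print(f"unfairness after repair: {best_unfairness:.3f}, "
      f"accuracy: {accuracy(predict_with_mask(best_mask)):.3f}")
```

Because the search only toggles a binary drop mask at inference time, the trained weights are never modified, which is what makes this a post-processing repair rather than a retraining approach.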
Keywords
» Artificial intelligence » Deep learning » Dropout » Inference