Summary of Identifying Bias in Deep Neural Networks Using Image Transforms, by Sai Teja Erukude et al.
Identifying Bias in Deep Neural Networks Using Image Transforms
by Sai Teja Erukude, Akhil Joshi, Lior Shamir
First submitted to arXiv on: 17 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on its arXiv page. |
Medium | GrooveSquid.com (original content) | This paper focuses on identifying biases that distort the evaluation of Convolutional Neural Networks (CNNs) on benchmark datasets. CNNs are often considered “black boxes” because users cannot directly observe how they analyze image data and must instead rely on empirical evaluation to test efficacy; hidden dataset biases can therefore skew performance evaluations. The authors identify such biases in common benchmark datasets and propose techniques for detecting dataset biases that affect standard performance metrics. One method classifies images using only their blank background parts, but a blank background is not always available. To overcome this, the authors introduce a new approach that applies various image transforms (Fourier, wavelet, median filter, and their combinations) to recover the bias information CNNs use to classify images. Because these transforms affect contextual visual information differently than systemic background biases, background bias can be detected without separating sub-image parts from blank backgrounds (a minimal illustration of such transforms appears after this table). |
Low | GrooveSquid.com (original content) | This paper is about making sure artificial intelligence (AI) systems like CNNs are fair and don’t make mistakes because of hidden flaws in the data they are tested on. AI systems are “black boxes,” meaning we can’t see how they work, so we have to test them by showing them lots of examples. But sometimes these test datasets aren’t perfect, which can lead to biases or unfair results. The authors looked at some common testing datasets and found that they contain such biases. They also came up with a way to detect when this is happening, even when the objects in the images can’t be cleanly separated from their backgrounds. This new method applies different mathematical filters to the images to reveal what the network is really responding to. |
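
To make the transform-based idea concrete, below is a minimal Python sketch (not taken from the paper): it assumes grayscale images stored as 2-D NumPy arrays and uses NumPy, PyWavelets, and SciPy; the specific transforms, parameters, and combinations used by the authors may differ. The intent is that the same CNN is then trained and evaluated on each transformed copy of the dataset (or on blank-background crops); accuracy well above chance on such images suggests the classifier is exploiting systemic background bias rather than the objects themselves.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.ndimage import median_filter


def fourier_magnitude(img: np.ndarray) -> np.ndarray:
    """2-D Fourier transform; return the centered log-magnitude spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))


def wavelet_approximation(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Single-level 2-D wavelet decomposition; keep the approximation band."""
    approx, _details = pywt.dwt2(img, wavelet)
    return approx


def median_filtered(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Median filter that smooths fine texture while preserving edges."""
    return median_filter(img, size=size)


def transformed_variants(img: np.ndarray) -> dict:
    """Build transformed copies of one image (hypothetical pipeline, not the
    authors' exact protocol). A CNN trained on such copies should perform
    near chance unless it is picking up systemic background bias."""
    return {
        "fourier": fourier_magnitude(img),
        "wavelet": wavelet_approximation(img),
        "median": median_filtered(img),
        "median+fourier": fourier_magnitude(median_filtered(img)),  # one example combination
    }
```

In practice, one would apply `transformed_variants` to every image in the benchmark, retrain the CNN on each variant, and compare the resulting accuracies against chance level.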