Understanding Bias in Large-Scale Visual Datasets
by Boya Zeng, Yida Yin, Zhuang Liu
First submitted to arXiv on: 2 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | A recent study revealed that large-scale visual datasets can be easily classified by modern neural networks due to bias. To better understand this bias, the proposed framework extracts various types of information (semantic, structural, boundary, color, and frequency) from these datasets and assesses each type’s relationship to the bias. The authors also decompose semantic bias using object-level analysis and leverage natural language methods to generate detailed descriptions of each dataset’s characteristics. Their goal is to help researchers understand bias in existing large-scale pre-training datasets and build more diverse and representative ones. |
Low | GrooveSquid.com (original content) | Large-scale visual datasets are biased, making them easy for modern neural networks to tell apart. But what exactly makes these datasets biased? We want to find out! To do this, we’re going to extract different types of information from these datasets and see how they relate to bias. We’ll also look at each dataset’s characteristics in detail. Our goal is to help researchers make better datasets that are more representative. |
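The "dataset classification" idea mentioned in the summaries above — a model can guess which dataset an image came from because each dataset has its own statistical signature — can be sketched with a toy example. This is not the paper's actual setup (the authors use modern neural networks on real pre-training datasets); here we fabricate two synthetic "datasets" whose images differ only in average color, and show that even a trivial nearest-centroid classifier on mean-color features separates them. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "datasets" of 8x8 RGB images whose pixel values are drawn
# around slightly different means -- a stand-in for a color bias.
def make_images(n, mean):
    return rng.normal(loc=mean, scale=0.1, size=(n, 8, 8, 3))

train_a = make_images(200, 0.45)
train_b = make_images(200, 0.55)

# Feature extraction: the per-image mean color, a crude "color" signature
# (the paper probes richer cues: semantics, structure, boundaries, frequency).
def features(imgs):
    return imgs.mean(axis=(1, 2))  # shape (n, 3)

# Nearest-centroid "dataset classifier": assign each image to the dataset
# whose average color signature it is closest to.
centroid_a = features(train_a).mean(axis=0)
centroid_b = features(train_b).mean(axis=0)

def predict(imgs):
    f = features(imgs)
    dist_a = np.linalg.norm(f - centroid_a, axis=1)
    dist_b = np.linalg.norm(f - centroid_b, axis=1)
    return np.where(dist_a < dist_b, "A", "B")

test_a = make_images(100, 0.45)
test_b = make_images(100, 0.55)
correct = np.sum(predict(test_a) == "A") + np.sum(predict(test_b) == "B")
accuracy = correct / 200
print(f"dataset-classification accuracy: {accuracy:.2f}")
```

A high accuracy here means the two collections are distinguishable from color statistics alone, which is exactly the kind of signal the paper's framework tries to isolate and attribute to specific forms of bias.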