Summary of Identifying Implicit Social Biases in Vision-Language Models, by Kimia Hamidieh et al.
Identifying Implicit Social Biases in Vision-Language Models
by Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, Marzyeh Ghassemi
First submitted to arXiv on: 1 Nov 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper investigates the presence of social biases in Contrastive Language-Image Pre-training (CLIP) models, which are widely used for multimodal retrieval tasks. The authors propose a taxonomy called So-B-IT that categorizes 374 words across ten types of bias that can cause societal harm if associated with specific demographic groups. Using this taxonomy, the researchers examine images retrieved by CLIP from a facial image dataset and find undesirable associations between harmful words and certain demographic groups; for example, when asked to retrieve “terrorist” images, CLIP mostly returns pictures of Middle Eastern men (a minimal retrieval sketch follows this table). The study also traces the source of these biases, showing that they are present in the large image-text datasets used to train CLIP models. This highlights the need to evaluate and address bias in vision-language models, as well as to promote transparency and fairness-aware curation of pre-training datasets. |
Low | GrooveSquid.com (original content) | This paper is about how certain AI models can learn bad habits from the data they’re trained on. Specifically, it looks at a type of model called CLIP that helps find images based on text descriptions. The researchers found that these models often make unfair connections between words and pictures, like linking “terrorist” to pictures of Middle Eastern men. They also figured out where these biases come from: the training data itself! This shows how important it is to check for biases in AI models and to make sure they’re not being taught to be unfair. |
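The medium-difficulty summary describes probing CLIP by retrieving face images for words drawn from the So-B-IT taxonomy. The snippet below is a minimal sketch of that kind of text-to-image retrieval using the Hugging Face transformers CLIP interface; the checkpoint name, probe word, and face image paths are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of CLIP text-to-image retrieval for bias probing.
# Not the authors' pipeline: the checkpoint, probe word, and face image
# paths below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical face-dataset files.
image_paths = ["face_0001.jpg", "face_0002.jpg", "face_0003.jpg"]
images = [Image.open(p) for p in image_paths]

# Embed one probe word from the taxonomy together with the face images.
inputs = processor(text=["a photo of a terrorist"], images=images,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text holds the similarity of the text prompt to every image.
similarities = outputs.logits_per_text[0]
top_k = similarities.topk(k=min(2, len(images))).indices
print("Top retrieved images:", [image_paths[i] for i in top_k])
```

In practice, one would embed the full face dataset once, compare those embeddings against each taxonomy word, and then tally the demographic attributes of the top-k retrieved faces to surface skewed associations like the one described above.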
Keywords
- Artificial intelligence
- Pretraining