How Well Do Deep Learning Models Capture Human Concepts? The Case of the Typicality Effect

by Siddhartha K. Vemuri, Raj Sanjay Shah, Sashank Varma

First submitted to arXiv on: 25 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)

The study evaluates how well the concept representations learned by deep learning models align with those of humans, focusing on the typicality effect: people judge some members of a category to be better examples of it than others. Prior work was limited to single-modality models and found only modest correlations with human typicality ratings. This study expands the evaluation to a broader range of language (N = 8) and vision (N = 10) model architectures, and also assesses combined predictions from vision + language model pairs and from a multimodal CLIP-based model. The key findings are that language models align better with human typicality judgments than vision models, that combined language and vision models outperform the individual models in predicting human typicality data, and that multimodal models show promise for explaining human typicality judgments. The study advances the state of the art in aligning the conceptual representations of ML models and humans.
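
To make the evaluation concrete, here is a minimal sketch of one common way such alignment is scored: derive a typicality score for each category exemplar from model embeddings (here, cosine similarity to the category centroid) and rank-correlate those scores with human ratings. This is an illustrative sketch, not the paper's actual pipeline; the exemplar names, embeddings, and human ratings below are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): score how well a model's
# embeddings reproduce human typicality ratings for one category.
# All data below are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical embeddings for exemplars of the category "bird",
# e.g. taken from a language or vision model's representation layer.
exemplars = ["robin", "sparrow", "eagle", "ostrich", "penguin"]
embeddings = rng.normal(size=(len(exemplars), 512))

# Hypothetical human typicality ratings (higher = more typical).
human_ratings = np.array([6.9, 6.5, 5.2, 2.4, 2.0])

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One simple model-derived typicality score: similarity of each
# exemplar's embedding to the category centroid.
centroid = embeddings.mean(axis=0)
model_scores = np.array([cosine(e, centroid) for e in embeddings])

# Alignment = rank correlation between model scores and human ratings.
rho, p = spearmanr(model_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

A rank correlation is a natural choice here because typicality is an ordinal judgment: what matters is whether the model orders exemplars (robin before penguin) the same way people do, not the absolute scale of its scores.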

Low Difficulty Summary (GrooveSquid.com original content)

How do machine learning models learn about concepts like “bird” or “car”? This study looks at how well these models understand what makes some examples more typical than others, like how a robin is a more typical bird than a penguin. The researchers tested many different types of language and vision models to see which ones are best at understanding this. They found that language models are better at it than vision models, and that combining language and vision models does even better. This study helps us understand how we can make machine learning models work more like our brains.
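
Both summaries note that combining language and vision model predictions beats either model alone. Below is a minimal sketch of one way such a combination could work, assuming per-exemplar typicality scores from each model are already available; all values are hypothetical placeholders, not the paper's data or method.

```python
# Illustrative sketch: combine typicality scores from a language model
# and a vision model via a linear fit against human ratings.
# All values are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

lang_scores = np.array([0.91, 0.87, 0.71, 0.40, 0.35])    # per exemplar
vision_scores = np.array([0.80, 0.83, 0.65, 0.52, 0.30])
human_ratings = np.array([6.9, 6.5, 5.2, 2.4, 2.0])

# Stack the two models' scores as features and fit a linear combination.
X = np.column_stack([lang_scores, vision_scores])
combined = LinearRegression().fit(X, human_ratings).predict(X)

for name, scores in [("language", lang_scores),
                     ("vision", vision_scores),
                     ("combined", combined)]:
    rho, _ = spearmanr(scores, human_ratings)
    print(f"{name:>8}: Spearman rho = {rho:.2f}")
```

Fitting and evaluating on the same five points here is only to show the mechanics; a real evaluation would hold out data or cross-validate so the combined model's advantage is not an artifact of in-sample fitting.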

Keywords

» Artificial intelligence  » Deep learning  » Language model  » Machine learning