
Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction

by Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths

First submitted to arXiv on: 6 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL); Neurons and Cognition (q-bio.NC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium and low difficulty versions are original summaries by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates how language affects the formation of abstractions in humans, using a novel multimodal serial reproduction framework that combines visual and linguistic stimuli. The researchers compare unimodal and multimodal chains run by both human and GPT-4 participants, finding that adding language as a modality has a larger effect on human reproductions than on GPT-4’s (a toy sketch of such a chain follows these summaries). This suggests that human visual and linguistic representations are more dissociable than those of GPT-4.
Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how people make sense of the world by passing information along to each other. The researchers use a game-like experiment, similar to the game of telephone, to see what happens when we communicate through different channels, like pictures or words. They compared what happens when humans and a language model called GPT-4 pass information along in these different ways. They found that communicating with both pictures and words changes what humans pass along more than it changes what GPT-4 passes along.
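
To make the serial reproduction setup concrete, here is a minimal toy sketch in Python. It is not the paper’s actual pipeline (which used image and text stimuli with human and GPT-4 participants); the binary-vector stimulus, the flip-noise “reproduction,” and all function names and parameters are illustrative assumptions.

```python
import random

random.seed(0)


def noisy_copy(stimulus, flip_prob=0.10):
    """One participant's imperfect reproduction: each feature may flip.

    The stimulus is a toy binary feature vector standing in for a
    visual stimulus; flip_prob is an assumed noise level.
    """
    return [1 - b if random.random() < flip_prob else b for b in stimulus]


def run_chain(seed, steps=10, cross_modal=False, translation_flip_prob=0.05):
    """Pass a stimulus down a serial reproduction chain.

    When cross_modal is True, each step also round-trips the stimulus
    through a second 'modality', modeled here as extra flip noise --
    a crude stand-in for alternating visual and linguistic reproduction.
    """
    history = [seed]
    stimulus = seed
    for _ in range(steps):
        stimulus = noisy_copy(stimulus)
        if cross_modal:
            stimulus = noisy_copy(stimulus, flip_prob=translation_flip_prob)
        history.append(stimulus)
    return history


def drift(history):
    """Fraction of features changed between the first and last stimulus."""
    first, last = history[0], history[-1]
    return sum(a != b for a, b in zip(first, last)) / len(first)


seed_stimulus = [random.randint(0, 1) for _ in range(100)]
print(f"unimodal drift:   {drift(run_chain(seed_stimulus)):.2f}")
print(f"multimodal drift: {drift(run_chain(seed_stimulus, cross_modal=True)):.2f}")
```

In this toy version the cross-modal round trip simply injects extra noise at every step, so multimodal chains drift further from the seed by construction. The paper’s actual question is subtler: whether adding the language modality reshapes reproductions differently for humans than for GPT-4, which is what the chain comparison measures.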

Keywords

» Artificial intelligence  » GPT  » Language model