Summary of What Are Large Language Models Mapping to in the Brain? A Case Against Over-Reliance on Brain Scores, by Ebrahim Feghhi et al.
What Are Large Language Models Mapping to in the Brain? A Case Against Over-Reliance on Brain Scores
by Ebrahim Feghhi, Nima Hadidi, Bryan Song, Idan A. Blank, Jonathan C. Kao
First submitted to arXiv on: 3 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: the paper’s original abstract (available on arXiv) |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The paper examines how similar large language models (LLMs) are to the human brain. It challenges the assumption that LLMs’ internal representations reflect core elements of language processing by reanalyzing three neural datasets, including an fMRI dataset in which participants read short passages. The analysis shows that a trivial feature encoding temporal autocorrelation outperforms LLMs when brain scores are computed with shuffled train-test splits, whereas under contiguous splits much of the signal is instead captured by sentence length and sentence position. Additionally, the brain scores of untrained LLMs are explained by these simple features, while the brain scores of trained LLMs can mostly be attributed to sentence length, sentence position, and word embeddings. The study emphasizes the importance of deconstructing what LLMs are mapping to in neural signals before interpreting brain scores as evidence of similarity between LLMs and brains (a minimal sketch of the brain-score procedure appears below the table). |
Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper asks how much the internal workings of large language models (LLMs) really have in common with human brain activity. The researchers re-examined three big datasets used in a previous study comparing LLMs and brains, and they found some surprising things. For example, a trivial feature that simply exploits how slowly brain signals change over time predicted brain activity better than the LLMs did under one way of splitting the data. When the data were split differently, that trick no longer helped; instead, simple features like sentence length and sentence position explained most of what the LLMs captured. Untrained LLMs, in particular, matched brain activity only because of these simple features. The researchers conclude that a high brain score alone isn’t proof that LLMs process language the way human brains do. |
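
The brain-score methodology that both summaries describe can be made concrete with a small sketch. The Python snippet below is not the authors’ code: it uses synthetic data and hypothetical feature names to illustrate the general recipe of ridge-regressing feature representations onto voxel responses and scoring held-out correlation, and it shows where the choice between shuffled and contiguous train-test splits enters that recipe.

```python
# Minimal sketch (not the authors' code) of a cross-validated "brain score":
# regress features onto voxel responses and report held-out correlation.
# Synthetic data stand in for real fMRI recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_voxels = 384, 50

# Hypothetical "simple" features: sentence length and position within a passage.
sent_length = rng.integers(5, 25, size=n_sentences).astype(float)
sent_position = np.tile(np.arange(4, dtype=float), n_sentences // 4)
simple_feats = np.stack([sent_length, sent_position], axis=1)

# Synthetic voxel responses partly driven by the simple features, plus noise.
weights = rng.normal(size=(2, n_voxels))
neural = simple_feats @ weights + rng.normal(scale=5.0, size=(n_sentences, n_voxels))

def brain_score(features: np.ndarray, responses: np.ndarray, shuffle: bool) -> float:
    """Mean held-out Pearson r between predicted and observed voxel responses."""
    kf = KFold(n_splits=5, shuffle=shuffle, random_state=0 if shuffle else None)
    fold_scores = []
    for train_idx, test_idx in kf.split(features):
        model = Ridge(alpha=1.0).fit(features[train_idx], responses[train_idx])
        pred = model.predict(features[test_idx])
        # Correlate prediction and ground truth per voxel, then average over voxels.
        rs = [np.corrcoef(pred[:, v], responses[test_idx, v])[0, 1]
              for v in range(responses.shape[1])]
        fold_scores.append(np.nanmean(rs))
    return float(np.mean(fold_scores))

# Shuffled splits interleave train and test sentences in time; contiguous splits
# keep test sentences in unbroken blocks, which is the contrast the paper probes.
print("shuffled split  :", round(brain_score(simple_feats, neural, shuffle=True), 3))
print("contiguous split:", round(brain_score(simple_feats, neural, shuffle=False), 3))
```

With real data, the LLM features would be activations extracted per sentence rather than synthetic columns, and the paper’s point is that baselines like the simple features above need to be accounted for before a high brain score is read as evidence of shared language processing.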