Frequency Explains the Inverse Correlation of Large Language Models’ Size, Training Data Amount, and Surprisal’s Fit to Reading Times

by Byung-Doh Oh, Shisen Yue, William Schuler

First submitted to arXiv on: 3 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)

Links: Abstract of paper · PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract; read it via the abstract link above.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

This work examines why the surprisal estimates of larger Transformer-based language models fit naturalistic human reading times worse than those of smaller models trained on less data. The analysis identifies word frequency as a key factor behind this inverse correlation: larger models become excessively accurate at predicting rare words, assigning them lower surprisal than human reading behavior warrants. The resulting mismatch between model predictions and reading times is strongest on the least frequent words. An examination of training dynamics further shows that all model variants learn to predict rare words during later training steps, and that larger variants do so more accurately, which explains why both more training data and more parameters degrade the fit to reading times.
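To make the core quantity concrete, here is a minimal sketch of how per-word surprisal can be computed with a pretrained language model. The model choice (gpt2), the example sentence, and the helper function are illustrative assumptions, not the authors’ actual pipeline, which spans several model families and reading-time corpora.

```python
# Illustrative sketch (not the paper's pipeline): compute per-token surprisal
# with a pretrained GPT-2 via Hugging Face transformers.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nll / math.log(2)  # convert nats to bits
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), bits.tolist()))

for tok, s in token_surprisals("The apothecary weighed the rare tincture."):
    print(f"{tok:>12s}  {s:6.2f} bits")
```

In reading-time studies, surprisal values like these are typically entered as predictors in a regression on human reading times; the paper’s finding concerns how well that fit holds up as model size and training data grow.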
Low Difficulty Summary (written by GrooveSquid.com, original content)
Transformer-based language models have gotten bigger and smarter, but they have actually gotten worse at predicting how long it takes humans to read text. The researchers found that rare words are the reason: bigger models get so good at guessing rare words that those words stop surprising them, even though people still slow down when reading them. That mismatch makes the models’ predictions less like natural human reading. Looking at how models change during training, the study found that all models eventually learn rare words, and bigger models learn them best, which is exactly what hurts their fit to reading times.
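As a toy illustration of the frequency-based mechanism described above, the sketch below uses invented numbers: a “large” model whose surprisal stays low and flat on rare words tracks reading times worse on those words than a “small” model whose surprisal still rises with rarity. All data and coefficients here are made up for demonstration; they are not from the paper.

```python
# Toy demonstration with synthetic data (all numbers invented): compare how
# well each model's surprisal correlates with reading times on rare words.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
log_freq = rng.uniform(0.0, 7.0, n)                      # simulated log word frequency
rt = 250 + 15 * (7.0 - log_freq) + rng.normal(0, 25, n)  # humans slow down on rare words

# "Small" model: surprisal still rises for rare words, mirroring human difficulty.
surprisal_small = 3 + 0.9 * (7.0 - log_freq) + rng.normal(0, 1, n)
# "Large" model: excessively accurate on rare words, so surprisal is flat in frequency.
surprisal_large = 4 + rng.normal(0, 1, n)

rare = log_freq < 2.0
for name, s in [("small", surprisal_small), ("large", surprisal_large)]:
    r, _ = stats.pearsonr(s[rare], rt[rare])
    print(f"{name} model, rare words: r = {r:.2f}")
```

On the rare-word subset, the small model’s surprisal correlates with the simulated reading times while the large model’s does not, mirroring the subset analysis the summaries describe.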

Keywords

* Artificial intelligence
* Transformer