Missed Connections: Lateral Thinking Puzzles for Large Language Models
by Graham Todd, Tim Merino, Sam Earle, Julian Togelius
First submitted to arXiv on: 17 Apr 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper explores whether artificial intelligence (AI) systems can play Connections, the popular puzzle game published by the New York Times. The game asks players to group 16 words into four categories tied to common themes, demanding both linguistic knowledge and abstract reasoning. The researchers use the game as a benchmark for measuring the abstract reasoning ability and semantic knowledge encoded in data-driven language models. They compare a sentence-embedding baseline against modern large language models (LLMs), report accuracy scores, analyze the impact of chain-of-thought prompting, and discuss failure modes. The study finds that the Connections task is challenging yet feasible, making it a strong test-bed for future work. |
| Low | GrooveSquid.com (original content) | AI systems can play the popular puzzle game Connections, published by the New York Times. The game asks players to group 16 words into four categories that share common themes. Researchers wanted to know whether AI systems can do this too, and whether the game is a good way to measure how well they reason abstractly. They tested two kinds of AI models: a baseline that compares sentence embeddings and modern large language models (LLMs). The study shows that these AI systems can play Connections, though some are better than others, and it helps explain why they sometimes fail. |
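To make the embedding-baseline idea concrete, here is a minimal illustrative sketch, not the paper's actual method: it greedily picks the four-word subset with the highest average pairwise cosine similarity, removes it, and repeats until all 16 words are grouped. The `EMBEDDINGS` table uses hand-made toy 2-D vectors standing in for real sentence embeddings; a real baseline would use an embedding model instead.

```python
import itertools
import math

# Toy stand-in for real sentence embeddings: hand-made 2-D vectors where
# words in the same hidden category point in a similar direction.
EMBEDDINGS = {
    "apple": (1.0, 0.1), "pear": (0.9, 0.2), "plum": (0.95, 0.0), "fig": (0.85, 0.15),
    "red": (-1.0, 0.1), "blue": (-0.9, 0.2), "green": (-0.95, 0.0), "pink": (-0.85, 0.15),
    "run": (0.1, 1.0), "jump": (0.2, 0.9), "swim": (0.0, 0.95), "walk": (0.15, 0.85),
    "oak": (0.1, -1.0), "elm": (0.2, -0.9), "ash": (0.0, -0.95), "fir": (0.15, -0.85),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def greedy_groups(words, group_size=4):
    """Repeatedly extract the group_size-subset of remaining words with the
    highest average pairwise cosine similarity."""
    remaining = set(words)
    groups = []
    while remaining:
        best = max(
            itertools.combinations(sorted(remaining), group_size),
            key=lambda g: sum(
                cosine(EMBEDDINGS[a], EMBEDDINGS[b])
                for a, b in itertools.combinations(g, 2)
            ),
        )
        groups.append(set(best))
        remaining -= set(best)
    return groups

groups = greedy_groups(EMBEDDINGS)
```

On these toy vectors the greedy procedure recovers all four categories; on real Connections puzzles, where categories often hinge on wordplay rather than semantic similarity, such a baseline is much weaker, which is part of what makes the game a useful benchmark.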
Keywords
» Artificial intelligence » Embedding » Prompting