Predictive representations: building blocks of intelligence
by Wilka Carvalho, Momchil S. Tomov, William de Cothi, Caswell Barry, Samuel J. Gershman
First submitted to arXiv on: 9 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper integrates theoretical ideas from reinforcement learning with cognitive science and neuroscience to develop a better understanding of how our brains predict future events. Specifically, it explores the successor representation (SR) and its generalizations, which have been used both as engineering tools and as models of brain function. The study suggests that certain predictive representations may serve as versatile building blocks of intelligence. |
| Low | GrooveSquid.com (original content) | This paper looks at how computers can learn from experience and predict what will happen next. It brings together ideas from computer science and neuroscience to understand how our brains work. The researchers are interested in something called the successor representation, which helps an agent make decisions based on past experience. They think this idea could be used to create more intelligent machines. |
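To make the successor representation concrete, here is a minimal sketch of the core idea: the SR matrix M accumulates discounted expected future state occupancies, M = Σ_t γ^t P^t, and state values then follow from any reward vector as V = M r. The 3-state chain, discount factor, and reward vector below are illustrative assumptions, not examples from the paper.

```python
# Minimal successor representation (SR) sketch.
# The 3-state chain, gamma, and rewards are illustrative assumptions.

def successor_representation(P, gamma, iters=200):
    """Compute M = sum_t gamma^t P^t by iterating the fixed point M <- I + gamma * P @ M."""
    n = len(P)
    # Start from the identity matrix (the t=0 term of the series).
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        M = [[(1.0 if i == j else 0.0)
              + gamma * sum(P[i][k] * M[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return M

# A tiny chain: state 0 -> 1 -> 2, with state 2 absorbing.
P = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0]]
gamma = 0.9
M = successor_representation(P, gamma)

# Values follow immediately from the SR: V = M @ r for any reward vector r.
# This is why the SR supports fast revaluation when rewards change.
r = [0.0, 0.0, 1.0]
V = [sum(M[i][j] * r[j] for j in range(len(r))) for i in range(len(r))]
```

Because M depends only on the transition dynamics, re-deriving values for a new reward vector is a single matrix-vector product, which is the flexibility the paper highlights when discussing the SR as a building block.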
Keywords
- Artificial intelligence
- Reinforcement learning