Summary of Decoupling Exploration and Exploitation for Unsupervised Pre-training with Successor Features, by JaeYoon Kim et al.
Decoupling Exploration and Exploitation for Unsupervised Pre-training with Successor Features
by JaeYoon Kim, Junyu Xuan, Christy Liang, Farookh Hussain
First submitted to arXiv on: 4 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed Non-Monolithic unsupervised Pre-training with Successor features (NMPS) model decouples exploration and exploitation using successor features, improving on existing methods that tend to get stuck in local optima. NMPS assigns a separate agent to each purpose and leverages the inherent characteristics of successor features (SFs), such as quick adaptation to new tasks and task-agnostic capabilities; a conceptual sketch of this design follows the table. In a comparative experiment, this unsupervised pre-training model outperforms Active Pre-training with Successor Features (APS). |
| Low | GrooveSquid.com (original content) | A team of researchers developed a new way to train artificial intelligence agents without labeled data. They used “successor features”, which help the agent learn about the environment and its rewards. This approach allows for better fine-tuning on specific tasks. The team’s model, called NMPS, does a better job than previous methods at exploring and learning from environments. |
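The summaries above describe NMPS only at a high level. To make the successor-feature idea concrete, the toy Python sketch below shows (a) the standard SF factorisation Q(s, a) = ψ(s, a) · w, which is what allows quick, task-agnostic adaptation by re-fitting only the task vector w, and (b) a decoupled, non-monolithic setup in which one agent explores during unsupervised pre-training while a separate agent exploits the shared SFs at fine-tuning time. All class and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (assumed names, not the paper's code).
# Successor features factor the value function: Q(s, a) = psi(s, a) . w,
# where psi is task-agnostic and w encodes the task-specific reward.

class SuccessorFeatureAgent:
    def __init__(self, n_states, n_actions, d_features):
        # psi[s, a] approximates the expected discounted sum of state features
        self.psi = np.zeros((n_states, n_actions, d_features))
        self.w = np.zeros(d_features)  # task vector; re-fit for each new task

    def q_values(self, state):
        # Task-specific action values come directly from the shared SFs.
        return self.psi[state] @ self.w

    def act(self, state, epsilon=0.0):
        if np.random.rand() < epsilon:
            return np.random.randint(self.psi.shape[1])
        return int(np.argmax(self.q_values(state)))


# Non-monolithic decoupling (conceptual): one agent explores to gather diverse
# experience during unsupervised pre-training, while a separate agent exploits
# the learned successor features when fine-tuning on a downstream task.
exploration_agent = SuccessorFeatureAgent(n_states=10, n_actions=4, d_features=8)
exploitation_agent = SuccessorFeatureAgent(n_states=10, n_actions=4, d_features=8)

# Pre-training: the exploration agent acts with high randomness to cover the
# environment; the shared psi would be learned from this experience.
action = exploration_agent.act(state=0, epsilon=1.0)

# Fine-tuning: copy the task-agnostic SFs and fit only the task vector w,
# which is what enables quick adaptation to a new reward function.
exploitation_agent.psi = exploration_agent.psi
exploitation_agent.w = np.random.randn(8)  # placeholder for a fitted w
greedy_action = exploitation_agent.act(state=0, epsilon=0.0)
```

In this sketch the decoupling is simply two separate epsilon settings on two agents; the paper's actual method for coordinating the exploration and exploitation agents is not reproduced here.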
Keywords
- Artificial intelligence
- Fine-tuning
- Unsupervised