Summary of Vision-Language Models Provide Promptable Representations for Reinforcement Learning, by William Chen, Oier Mees, Aviral Kumar, and Sergey Levine
Vision-Language Models Provide Promptable Representations for Reinforcement Learning
by William Chen, Oier Mees, Aviral Kumar, Sergey Levine
First submitted to arXiv on: 5 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on the arXiv listing. |
Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to reinforcement learning (RL) that leverages the vast world knowledge encoded in vision-language models (VLMs). By initializing policies with VLMs used as promptable representations (a minimal sketch of the idea follows this table), the authors demonstrate improved performance on visually complex RL tasks in Minecraft and on robot navigation in Habitat. The approach outperforms equivalent policies trained on generic image embeddings as well as instruction-following methods, and it produces representations that capture common-sense semantic reasoning, improving policy performance in novel scenes. |
Low | GrooveSquid.com (original content) | This paper helps robots learn new behaviors by using big computer programs called vision-language models (VLMs). VLMs know lots about the world from looking at internet data. The authors use these VLMs to help robots make better decisions in complex environments, like building with blocks or navigating a maze. They show that this approach is better than other ways of teaching robots new skills and can even help robots understand things they haven't seen before. |
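To make the "promptable representations" idea concrete, here is a minimal, hypothetical sketch (not the authors' code): a frozen VLM is queried with a task-relevant text prompt about each image observation, and the resulting embedding is fed to a small policy head trained with standard RL. The `PromptableVLM` class, its `embed_dim`, the prompt string, and the action count are all illustrative placeholders rather than an actual VLM API.

```python
import torch
import torch.nn as nn


class PromptableVLM(nn.Module):
    """Hypothetical stand-in for a frozen VLM that embeds (image, prompt) pairs."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.embed_dim = embed_dim

    @torch.no_grad()
    def forward(self, images: torch.Tensor, prompt: str) -> torch.Tensor:
        # A real implementation would run the VLM on the images conditioned on
        # the prompt and return, for example, a pooled decoder hidden state.
        return torch.zeros(images.shape[0], self.embed_dim)


class PromptedPolicy(nn.Module):
    """Small trainable policy head on top of the frozen, prompted VLM."""

    def __init__(self, vlm: PromptableVLM, num_actions: int, prompt: str):
        super().__init__()
        self.vlm = vlm        # frozen backbone providing promptable representations
        self.prompt = prompt  # task-relevant question used to shape the representation
        self.head = nn.Sequential(
            nn.Linear(vlm.embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.vlm(images, self.prompt)  # promptable representation
        return self.head(features)                # action logits, trained with RL


# Illustrative usage: one RGB observation, an example task prompt, and a policy
# head that would be optimized with any standard RL algorithm (e.g. PPO).
policy = PromptedPolicy(PromptableVLM(), num_actions=8,
                        prompt="Is the target object visible, and where is it?")
action_logits = policy(torch.zeros(1, 3, 224, 224))
```

The prompt is what distinguishes this setup from the "generic image embeddings" baseline mentioned in the summary: the same VLM queried without a task-relevant question would yield only a generic visual feature.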
Keywords
- Artificial intelligence
- Reinforcement learning