Summary of Foundation Policies with Hilbert Representations, by Seohong Park et al.


Foundation Policies with Hilbert Representations

by Seohong Park, Tobias Kreiman, Sergey Levine

First submitted to arXiv on: 23 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed framework pre-trains generalist policies that capture diverse, optimal behaviors from unlabeled offline data. By learning a structured representation and then spanning this latent space with directional movements, the approach enables zero-shot policy “prompting” schemes for downstream tasks. The method is demonstrated on simulated robotic locomotion and manipulation benchmarks, achieving superior results compared to prior methods in some settings.
Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes a new way to train AI models that can learn from large amounts of data without being told what to do. This allows the model to discover new skills and behaviors that it can use to solve different tasks. The approach is tested on robot simulations and shows promising results, even beating existing methods in some cases.
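The directional "prompting" idea described in the medium-difficulty summary above can be sketched in a few lines. This is an illustrative toy, not the authors' code: `phi`, `PHI`, and `directional_reward` are assumed names, and a fixed linear map stands in for the learned representation. The key idea is that once states are embedded in a latent space, a unit direction `z` can serve as a task prompt by rewarding latent movement along `z`.

```python
import numpy as np

# Stand-in for a trained representation: a fixed linear map from a
# 3-D state to a 2-D latent space (in the paper, phi is a learned
# network; this toy map is purely illustrative).
PHI = np.array([[1.0, 0.0, 0.5],
                [0.0, 1.0, -0.5]])

def phi(state):
    """Embed a state into the latent space."""
    return PHI @ np.asarray(state, dtype=float)

def directional_reward(s, s_next, z):
    """Intrinsic reward for moving along latent direction z:
    the inner product of the latent displacement with unit z."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z)
    return float((phi(s_next) - phi(s)) @ z)

if __name__ == "__main__":
    s, s_next = np.zeros(3), np.array([1.0, 0.0, 0.0])
    # A transition that moves "east" in latent space is rewarded
    # under the prompt z = (1, 0) ...
    print(directional_reward(s, s_next, [1.0, 0.0]))   # 1.0
    # ... and penalized under the opposite prompt z = (-1, 0).
    print(directional_reward(s, s_next, [-1.0, 0.0]))  # -1.0
```

Conditioning a single policy on different choices of `z` is what allows one pre-trained model to be "prompted" into different behaviors at test time without additional reward labels.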

Keywords

* Artificial intelligence  * Latent space  * Prompting  * Zero-shot