Summary of SemTra: A Semantic Skill Translator for Cross-Domain Zero-Shot Policy Adaptation, by Sangwoo Shin et al.
SemTra: A Semantic Skill Translator for Cross-Domain Zero-Shot Policy Adaptation
by Sangwoo Shin, Minjong Yoo, Jeongwoo Lee, Honguk Woo
First submitted to arXiv on: 12 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract (read it on arXiv) |
| Medium | GrooveSquid.com (original content) | This paper introduces SemTra, a semantic skill translator framework that enables zero-shot adaptation of expert behavior patterns across domains. The framework uses multi-modal models to extract skills from snippets of interleaved multi-modal user input, then adapts them using a pretrained language model's reasoning capabilities. This hierarchical process comprises task adaptation, which translates the extracted skills into a semantic skill sequence tailored to the target domain, and skill adaptation, which optimizes each skill for the current context through parametric instantiation and contrastive learning-based inference. SemTra is evaluated on Meta-World, Franka Kitchen, RLBench, and CARLA, demonstrating its ability to perform long-horizon tasks and adapt to new domains in a zero-shot manner, with potential applications in cognitive robots and autonomous vehicles. (A hypothetical code sketch of this pipeline follows the table.) |
| Low | GrooveSquid.com (original content) | This paper helps computers learn new skills without needing practice data for each new domain. It works like a translator that takes what someone shows or says and turns it into an action plan for a robot or a car. The researchers built a system called SemTra that combines computer vision, natural language processing, and machine learning to understand instructions and adapt them to different situations. This means robots could follow complex instructions without extra training, and cars could adjust their behavior in response to changing road conditions. |
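
To make the hierarchical adaptation described in the medium-difficulty summary concrete, here is a minimal Python sketch of such a pipeline. It is not the authors' implementation: the function names (`extract_skills`, `task_adapt`, `skill_adapt`, `zero_shot_adapt`) and the injected `multimodal_encoder`, `language_model`, and `contrastive_matcher` objects are hypothetical placeholders standing in for the pretrained models the paper relies on.

```python
# Hypothetical sketch of a SemTra-style two-stage adaptation pipeline.
# All model objects are injected placeholders, not real library APIs.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class SemanticSkill:
    name: str                   # semantic skill token, e.g. "open-drawer"
    parameters: Dict[str, Any]  # target-domain instantiation (object, pose, speed, ...)


def extract_skills(snippets: List[Any], multimodal_encoder) -> List[str]:
    """Turn interleaved multi-modal snippets (text, images, video) into
    semantic skill tokens using a pretrained multi-modal model (placeholder)."""
    return [multimodal_encoder.to_skill_token(s) for s in snippets]


def task_adapt(skill_tokens: List[str], language_model, target_domain: str) -> List[str]:
    """Task adaptation: prompt a pretrained language model to translate the
    source skill sequence into a semantic sequence suited to the target domain."""
    prompt = f"Rewrite the skill sequence {skill_tokens} for the domain '{target_domain}'."
    return language_model.generate_skill_sequence(prompt)


def skill_adapt(skill_token: str, context: Dict[str, Any], contrastive_matcher) -> SemanticSkill:
    """Skill adaptation: pick a parametric instantiation of the skill for the
    current context, scored by a contrastive (embedding-similarity) model."""
    candidates = contrastive_matcher.propose_instantiations(skill_token, context)
    best = max(candidates, key=lambda p: contrastive_matcher.score(skill_token, p, context))
    return SemanticSkill(name=skill_token, parameters=best)


def zero_shot_adapt(snippets, context, multimodal_encoder, language_model,
                    contrastive_matcher, target_domain) -> List[SemanticSkill]:
    """End-to-end pipeline: extract skills, adapt the task-level sequence,
    then instantiate each skill for the target context."""
    tokens = extract_skills(snippets, multimodal_encoder)
    adapted = task_adapt(tokens, language_model, target_domain)
    return [skill_adapt(t, context, contrastive_matcher) for t in adapted]
```

The structural point this sketch tries to capture is the two-level split the summary describes: task adaptation rewrites the skill sequence with a language model, while skill adaptation resolves each skill's parameters against the current context, which is what lets the same semantic plan transfer to a new domain without additional training.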
Keywords
» Artificial intelligence » Language model » Machine learning » Multi-modal » Natural language processing » Zero-shot