Summary of Detecting Hidden Triggers: Mapping Non-Markov Reward Functions to Markov, by Gregory Hyde et al.
Detecting Hidden Triggers: Mapping Non-Markov Reward Functions to Markov
by Gregory Hyde, Eugene Santos Jr.
First submitted to arXiv on: 20 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract (available on arXiv). |
Medium | GrooveSquid.com (original content) | This paper proposes a framework for mapping non-Markovian reward functions to equivalent Markov ones by learning specialized reward automata called Reward Machines. Unlike traditional approaches, the method learns hidden triggers directly from data to construct propositional symbols. The authors show why learning Reward Machines, rather than their Deterministic Finite-State Automata counterparts, matters for modeling reward dependencies; they formalize this distinction in their learning objective and cast the mapping process as an Integer Linear Programming problem. The paper proves that these mappings serve as a suitable proxy for maximizing reward expectations and empirically validates the approach on non-Markovian reward functions in the Officeworld domain, with promising results in the Breakfastworld domain as well. |
Low | GrooveSquid.com (original content) | This paper is about helping machines learn from rewards that aren't always straightforward. The authors propose a way to convert complex reward systems into ones that are easier to understand and work with. Their approach uses special machines called Reward Machines, which learn hidden patterns in the data to help predict what will happen next. The authors show that these Reward Machines can outperform traditional methods, and they test their approach in two scenarios: an office environment and a kitchen. |
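To make the Reward Machine idea in the medium-difficulty summary concrete, here is a toy sketch: a finite-state automaton whose transitions fire on propositional symbols ("triggers") and emit rewards, so that a history-dependent reward becomes Markov once the automaton state is tracked alongside the environment state. The class, the symbol names, and the "coffee then office" task below are our own illustration under those assumptions, not the authors' code or the paper's Officeworld specification.

```python
class RewardMachine:
    """Minimal Reward Machine: an automaton over propositional symbols.

    transitions maps (state, symbol) -> (next_state, reward); any pair
    not listed is a self-loop with zero reward.
    """

    def __init__(self, transitions, initial_state):
        self.transitions = transitions
        self.initial = initial_state
        self.state = initial_state

    def reset(self):
        self.state = self.initial

    def step(self, symbol):
        """Advance on one observed symbol and return the emitted reward."""
        next_state, reward = self.transitions.get(
            (self.state, symbol), (self.state, 0.0)
        )
        self.state = next_state
        return reward


# A non-Markov task: reward 1 only after seeing "coffee" and then "office".
# The reward for "office" depends on history, but the pair
# (environment state, machine state) is Markov.
rm = RewardMachine(
    transitions={
        ("u0", "coffee"): ("u1", 0.0),
        ("u1", "office"): ("u2", 1.0),
    },
    initial_state="u0",
)

# "office" before "coffee" earns nothing; the ordered pair earns 1.
total = sum(rm.step(s) for s in ["office", "coffee", "office"])
```

An agent would run this machine in parallel with the environment and condition its policy on the machine's current state, which is what makes the augmented process Markov.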