Summary of Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies, by Haanvid Lee et al.
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies
by Haanvid Lee, Tri Wahyu Guntara, Jongmin Lee, Yung-Kyun Noh, Kee-Eung Kim
First submitted to arXiv on: 29 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper studies off-policy evaluation (OPE) of deterministic target policies in reinforcement learning (RL) with continuous action spaces. The standard approach, importance sampling, suffers from high variance when the behavior policy deviates significantly from the target policy. Recent works address this with in-sample learning via importance resampling, but such resampling is not applicable to deterministic target policies, which place all of their probability on a single action. The authors instead relax the deterministic target policy with a kernel and learn the kernel metric that minimizes the mean squared error of the estimated temporal-difference (TD) update vector of an action-value function. They derive the bias and variance of the estimation error introduced by this relaxation and provide analytic solutions for the optimal kernel metric (see the sketch after the table for an illustrative example). |
Low | GrooveSquid.com (original content) | This paper looks at how to judge a new decision-making policy using only data collected while following a different policy, since actually running the new policy could be costly or risky. This is especially hard when the new policy always picks one exact action out of a continuous range of possibilities, because the logged data almost never contains that exact action. The authors soften the target policy so that nearby logged actions can still be used, and they work out the best way to measure "nearby" so the evaluation stays accurate. |
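For readers who want a concrete picture of the kernel relaxation described in the medium summary, below is a minimal sketch, not the authors' implementation. It assumes a Gaussian kernel with a learnable metric matrix (here called `metric`) and shows how logged actions near the deterministic target action could be reweighted to build an in-sample temporal-difference target; all function and variable names are illustrative.

```python
import numpy as np

def gaussian_kernel_weights(actions, target_action, metric):
    """Gaussian kernel weights K_A(a - pi(s')) for logged actions, using a
    positive-definite metric matrix. Illustrative choice, not the paper's exact kernel."""
    diffs = actions - target_action                       # shape (n, action_dim)
    quad = np.einsum("ni,ij,nj->n", diffs, metric, diffs) # Mahalanobis-style quadratic form
    return np.exp(-0.5 * quad)

def kernel_relaxed_td_target(reward, next_q_values, weights, gamma=0.99):
    """TD target built only from logged next actions: the kernel weights act as
    resampling weights, so no action outside the dataset is ever queried."""
    w = weights / (weights.sum() + 1e-8)                  # normalized resampling weights
    return reward + gamma * np.sum(w * next_q_values)     # kernel-smoothed next-state value

# Toy usage with hypothetical logged data.
rng = np.random.default_rng(0)
logged_next_actions = rng.normal(size=(32, 2))  # actions drawn by the behavior policy at s'
target_action = np.zeros(2)                     # pi(s') for the deterministic target policy
metric = 4.0 * np.eye(2)                        # stand-in for the learned kernel metric
q_next = rng.normal(size=32)                    # current estimates of Q(s', a) at logged actions
weights = gaussian_kernel_weights(logged_next_actions, target_action, metric)
print(kernel_relaxed_td_target(reward=1.0, next_q_values=q_next, weights=weights))
```

Intuitively, a wider kernel metric lowers variance because more logged actions contribute, but it raises bias relative to evaluating exactly at the target action; the paper's contribution is choosing the metric that balances this trade-off by minimizing the mean squared error of the TD update.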
Keywords
» Artificial intelligence » Reinforcement learning