
Streetwise Agents: Empowering Offline RL Policies to Outsmart Exogenous Stochastic Disturbances in RTC

by Aditya Soni, Mayukh Das, Anjaly Parayil, Supriyo Ghosh, Shivam Shandilya, Ching-An Cheng, Vishak Gopal, Sami Khairy, Gabriel Mittag, Yasaman Hosseinkashi, Chetan Bansal

First submitted to arXiv on: 11 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper addresses online data/feedback-driven decision making, where exploring and training directly on real production systems is rarely practical. The authors therefore learn policies via offline reinforcement learning from limited trajectory samples, while recognizing that such policies can fail after deployment when exogenous factors alter the transition distribution. To counter this, they introduce Streetwise, a post-deployment shaping of policies that conditions on a real-time characterization of out-of-distribution sub-spaces, yielding robust actions in bandwidth estimation (BWE) for real-time communication and in other tasks (a minimal illustrative sketch follows the summaries below).

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how to make good real-time decisions when a policy must be trained offline from limited trajectory samples. Such policies can suffer critical failures and generalization errors once the environment changes or is disturbed after deployment, and the proposed method, Streetwise, is designed to guard against exactly that. The authors show that Streetwise improves final returns by about 18% compared to state-of-the-art baselines.

Keywords

» Artificial intelligence  » Generalization  » Reinforcement learning