Summary of The Indoor-Training Effect: Unexpected Gains From Distribution Shifts in the Transition Function, by Serena Bono et al.
The Indoor-Training Effect: Unexpected Gains from Distribution Shifts in the Transition Function
by Serena Bono, Spandan Madan, Ishaan Grover, Mao Yasueda, Cynthia Breazeal, Hanspeter Pfister, Gabriel Kreiman
First submitted to arXiv on: 29 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The research investigates whether agents in reinforcement learning problems can benefit from training in a noise-free environment and being tested in a noisy one. The study proposes Noise Injection, a method for generating new Markov Decision Processes (MDPs) by adding parametric noise to the transition function, which gives quantitative control over the level of noise between environments. Contrary to conventional wisdom, agents can perform better when trained on the original environment and tested on the noisy variations, demonstrating the Indoor-Training Effect. The phenomenon is observed across 60 different ATARI game variations, including PacMan, Pong, and Breakout. |
| Low | GrooveSquid.com (original content) | Imagine playing a video game in a quiet room versus a loud outdoor space. The researchers wanted to know if agents (like computer programs) can learn better by training in one environment and being tested in another. They developed a way to create new “noisy” environments by adding controlled amounts of noise to the original game. Surprisingly, they found that agents performed better when trained in a quiet environment and tested in a noisy one. This effect was seen across many different games, including PacMan, Pong, and Breakout. |
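To make the Noise Injection idea concrete, here is a minimal sketch of one natural way to add parametric noise to a tabular MDP's transition function: mix each transition distribution with a uniform distribution, with a single parameter controlling the noise level. This is an illustrative assumption, not necessarily the exact parametrization used in the paper; the function and variable names are hypothetical.

```python
import numpy as np

def inject_noise(P, epsilon):
    """Mix each transition distribution with uniform noise.

    P: array of shape (S, A, S), where P[s, a, s'] = Pr(s' | s, a).
    epsilon: noise level in [0, 1]; 0 leaves P unchanged, 1 is fully uniform.
    Illustrative sketch only -- one plausible form of parametric noise.
    """
    n_states = P.shape[-1]
    uniform = np.full_like(P, 1.0 / n_states)
    # Convex mixture keeps every P[s, a, :] a valid probability distribution.
    return (1.0 - epsilon) * P + epsilon * uniform

# Toy 2-state, 1-action MDP with deterministic transitions.
P = np.array([[[1.0, 0.0]],
              [[0.0, 1.0]]])
P_noisy = inject_noise(P, 0.2)
print(P_noisy[0, 0])  # [0.9 0.1] -- rows still sum to 1
```

Because epsilon interpolates continuously between the original and a fully uniform transition function, it gives the kind of quantitative control over the gap between training and testing environments that the summary describes.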
Keywords
* Artificial intelligence * Reinforcement learning