Summary of SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning, by Shuai Zhang et al.


SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning

by Shuai Zhang, Heshan Devaka Fernando, Miao Liu, Keerthiram Murugesan, Songtao Lu, Pin-Yu Chen, Tianyi Chen, Meng Wang

First submitted to arXiv on: 24 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty summary (paper authors)
Read the original abstract here.

Medium difficulty summary (GrooveSquid.com, original content)
The paper studies transfer reinforcement learning (RL) problems in which multiple RL tasks have different reward functions but share the same underlying transition dynamics. The authors decompose the Q-function into a successor feature (SF) and a reward mapping; this decomposition reduces sample complexity and shows promising empirical performance compared with traditional RL methods such as Q-learning. However, its theoretical foundations have not been established, especially when the SFs are learned with deep neural networks (SF-DQN). This paper establishes provable knowledge transfer for SF-DQN in transfer RL problems, showing that it outperforms conventional RL approaches.
Low difficulty summary (GrooveSquid.com, original content)
This paper is about using artificial intelligence to help robots and computers learn new tasks by copying what they’ve already learned. It’s like how humans can learn a new language by listening to someone speak it. The authors take a complex problem called “transfer reinforcement learning” and break it down into smaller parts, making it easier for machines to solve. They show that this approach is better than older methods at teaching machines new skills.
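The decomposition the summaries describe can be illustrated numerically. The sketch below is not the paper's SF-DQN algorithm; it is a minimal tabular example (with made-up sizes and random features) of the underlying successor-feature identity: if the reward is linear in features, r(s,a) = φ(s,a)ᵀw, then the Q-function of a fixed policy factors as Q(s,a) = ψ(s,a)ᵀw, where ψ is the discounted sum of future features. Only the weight vector w changes across tasks that share the same dynamics, which is why the decomposition helps transfer.

```python
import numpy as np

# Toy illustration of the successor-feature decomposition Q = psi^T w on a
# random 3-state, 2-action MDP. All sizes and names here are illustrative.
rng = np.random.default_rng(0)
nS, nA, d, gamma = 3, 2, 4, 0.9

P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)     # transition probabilities P(s'|s,a)
phi = rng.random((nS, nA, d))         # state-action features phi(s,a)
w = rng.random(d)                     # task-specific reward weights
r = phi @ w                           # reward linear in features: r = phi^T w
pi = np.zeros(nS, dtype=int)          # a fixed policy (always action 0)

# Policy evaluation for both objects with the same Bellman-style recursion:
#   psi(s,a) = phi(s,a) + gamma * E[psi(s', pi(s'))]
#   Q(s,a)   = r(s,a)   + gamma * E[Q(s', pi(s'))]
psi = np.zeros((nS, nA, d))
Q = np.zeros((nS, nA))
for _ in range(500):
    psi = phi + gamma * np.einsum('san,nd->sad', P, psi[np.arange(nS), pi])
    Q = r + gamma * np.einsum('san,n->sa', P, Q[np.arange(nS), pi])

# The Q-function factors through the successor features: Q(s,a) = psi(s,a)^T w.
assert np.allclose(psi @ w, Q)
```

For a new task with the same dynamics but a different reward, only `w` changes; `psi` can be reused, which is the sample-complexity saving the summary alludes to. The paper's contribution is extending this picture, with guarantees, to SFs represented by deep networks.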

Keywords

  • Artificial intelligence
  • Reinforcement learning