Acceleration for Deep Reinforcement Learning using Parallel and Distributed Computing: A Survey

by Zhihong Liu, Xin Xu, Peng Qiao, Dongsheng Li

First submitted to arXiv on: 8 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates training acceleration methodologies for deep reinforcement learning based on parallel and distributed computing. The authors provide a comprehensive survey of state-of-the-art methods and core references in this field, and discuss emerging topics and open issues, including learning system architectures, simulation parallelism, computing parallelism, distributed synchronization mechanisms, and deep evolutionary reinforcement learning; a small illustrative sketch of simulation parallelism follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
In simple terms, this paper is about making it faster to train artificial intelligence models by using many computers at the same time. This is important because training these models can take a very long time, even with powerful computers. The authors look at different ways to speed up the process and discuss what works best for each approach. They also compare 16 open-source libraries that people use to develop their own AI projects.

Keywords

  • Artificial intelligence
  • Reinforcement learning