
Summary of Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling, by Boyang Li et al.


Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling

by Boyang Li, Zhiling Lan, Michael E. Papka

First submitted to arXiv on: 24 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes Interpretable Reinforcement Learning (IRL), a framework for deep reinforcement learning (DRL)-driven cluster scheduling. DRL scheduling has shown promising results in high-performance computing (HPC), but the deep neural network (DNN) at its core is a black-box model for system managers. IRL addresses this by converting the DNN policy into a decision tree through imitation learning with the Dataset Aggregation (DAgger) algorithm, and uses the notion of critical states to prune the derived tree. Experimental results demonstrate that IRL converts a black-box DNN policy into an interpretable, rule-based decision tree while maintaining comparable scheduling performance.
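
To make the imitation-learning step more concrete, below is a minimal Python sketch of DAgger-style distillation of a black-box policy into a decision tree. The expert_policy function, state dimensions, and toy environment dynamics are hypothetical stand-ins rather than the paper's actual HPC scheduling setup, and scikit-learn's DecisionTreeClassifier with a simple depth limit stands in for the critical-state pruning described in the abstract.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

STATE_DIM = 8      # hypothetical number of scheduler state features
NUM_ACTIONS = 4    # hypothetical number of candidate jobs to pick from

def expert_policy(state):
    """Stand-in for the trained black-box DRL scheduler (the DNN)."""
    return int(np.dot(state, np.arange(STATE_DIM)) % NUM_ACTIONS)

def rollout(policy, num_steps, rng):
    """Collect the states visited while following `policy` (toy dynamics)."""
    states = []
    state = rng.random(STATE_DIM)
    for _ in range(num_steps):
        states.append(state)
        policy(state)                    # action chosen by the acting policy
        state = rng.random(STATE_DIM)    # toy transition; real environment omitted
    return np.array(states)

def dagger_distill(iterations=5, steps_per_iter=500, max_depth=6, seed=0):
    """DAgger-style distillation of expert_policy into a decision tree."""
    rng = np.random.default_rng(seed)
    all_states, all_actions = [], []
    tree = None
    for it in range(iterations):
        # Iteration 0 rolls out the expert; later iterations roll out the
        # current tree so it is trained on states its own choices reach.
        if tree is None:
            acting = expert_policy
        else:
            acting = lambda s: int(tree.predict(s.reshape(1, -1))[0])
        states = rollout(acting, steps_per_iter, rng)
        # DAgger step: label every visited state with the expert's action.
        actions = np.array([expert_policy(s) for s in states])
        all_states.append(states)
        all_actions.append(actions)
        X, y = np.concatenate(all_states), np.concatenate(all_actions)
        # The depth limit is a crude stand-in for the paper's pruning step.
        tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
        tree.fit(X, y)
        print(f"iteration {it}: {len(X)} samples, agreement {tree.score(X, y):.3f}")
    return tree

Each DAgger iteration rolls out the current tree, asks the expert to label the states the tree actually visits, and retrains on the aggregated dataset, which keeps the tree accurate on its own state distribution rather than only on the expert's.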
Low Difficulty Summary (written by GrooveSquid.com, original content)
In this research, scientists developed a new way to make complex computer systems easier to understand. They used deep reinforcement learning, a kind of powerful AI, to schedule jobs across many computers at the same time. The problem is that these AIs are very hard to understand, which makes them hard to trust. So the researchers created a method called IRL (Interpretable Reinforcement Learning) that makes the AI transparent by turning it into a simple decision tree. This means people can finally see what the AI is doing and why it made certain decisions. The results show that the new method schedules jobs about as well as the original AI, but now we can actually understand how it works.
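
Continuing the sketch above, the distilled tree can be printed as plain if/then rules, which is the sense in which the policy becomes human-readable; the feature names here are made-up scheduler inputs, not the paper's actual state features.

from sklearn.tree import export_text

student_tree = dagger_distill()  # tree produced by the DAgger sketch above
feature_names = [f"job_feature_{i}" for i in range(STATE_DIM)]  # hypothetical names
print(export_text(student_tree, feature_names=feature_names))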

Keywords

  • Artificial intelligence
  • Decision tree
  • Deep learning
  • Reinforcement learning