Decision Transformer vs. Decision Mamba: Analysing the Complexity of Sequential Decision Making in Atari Games

by Ke Yan

First submitted to arXiv on: 1 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper analyzes the performance gap between Decision Transformer (DT) and Decision Mamba (DM) on sequence modeling reinforcement learning tasks across different Atari games. The study finds that DM generally outperforms DT in simpler games, while DT performs better in more complex ones. To understand these differences, the researchers expanded the evaluation to 12 games and analyzed several game characteristics, including action space complexity, visual complexity, average trajectory length, and average number of steps to the first non-zero reward. The results show that the performance gap between DT and DM is driven by multiple factors, with action space complexity and visual complexity being the primary determinants: DM excels in environments with simple actions and visuals, while DT has the advantage in games with higher complexity. (A rough sketch of how such game characteristics might be computed from offline trajectories follows the summaries below.)

Low Difficulty Summary (GrooveSquid.com, original content)
The paper compares two AI models, Decision Transformer (DT) and Decision Mamba (DM), to see how well they do in different video games. The results show that one model does better in simpler games, while the other does better in harder ones. To figure out why, the researchers looked at many things about each game, like how complex the actions are and how much is happening on screen. They found that the difference between the two models comes down to these factors. The paper helps us understand why AI models do well or poorly in different situations, which could help us build better AI models in the future.
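
The medium difficulty summary refers to per-game characteristics such as action space complexity, average trajectory length, and average number of steps to the first non-zero reward. The paper's exact procedure is not reproduced here; what follows is a minimal sketch, assuming the trajectories are available offline as per-step reward sequences, of how such statistics could be computed. The Trajectory container, the function names, and the toy reward values are illustrative assumptions, not the authors' code.

from dataclasses import dataclass
from statistics import mean
from typing import List, Optional


@dataclass
class Trajectory:
    """Illustrative container for one offline episode; only per-step rewards are needed here."""
    rewards: List[float]


def steps_to_first_reward(traj: Trajectory) -> Optional[int]:
    """1-based index of the first non-zero reward, or None if the episode has none."""
    for step, reward in enumerate(traj.rewards, start=1):
        if reward != 0:
            return step
    return None


def game_characteristics(trajectories: List[Trajectory], n_actions: int) -> dict:
    """Summarize one game by action space size, average trajectory length,
    and average steps to the first non-zero reward (over episodes that have one)."""
    first_reward_steps = [
        step for traj in trajectories
        if (step := steps_to_first_reward(traj)) is not None
    ]
    return {
        "action_space_size": n_actions,
        "avg_trajectory_length": mean(len(traj.rewards) for traj in trajectories),
        "avg_steps_to_first_reward": mean(first_reward_steps) if first_reward_steps else float("nan"),
    }


if __name__ == "__main__":
    # Toy stand-ins for real Atari trajectories; the numbers are illustrative, not from the paper.
    toy_trajectories = [
        Trajectory(rewards=[0, 0, 1, 0, 0]),
        Trajectory(rewards=[0, 0, 0, 0, 0, 0, 2]),
        Trajectory(rewards=[0, 1]),
    ]
    print(game_characteristics(toy_trajectories, n_actions=4))

In practice, such per-game statistics would then be set against the normalized scores of DT and DM on the corresponding game to see which characteristics track the performance gap.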

Keywords

  • Artificial intelligence
  • Reinforcement learning
  • Transformer