
Summary of Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models, by Yang Zhang et al.


Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models

by Yang Zhang, Chenjia Bai, Bin Zhao, Junchi Yan, Xiu Li, Xuelong Li

First submitted to arXiv on: 22 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
A novel world model for Multi-Agent Reinforcement Learning (MARL) is proposed that learns decentralized local dynamics to address scalability and applies centralized representation aggregation to tackle non-stationarity. The architecture leverages the Transformer's autoregressive sequence modeling to capture complex local dynamics across agents, and introduces the Perceiver Transformer to perform the centralized representation aggregation. Results on the StarCraft Multi-Agent Challenge (SMAC) demonstrate improved sample efficiency and overall performance compared to model-free approaches and existing model-based methods.

Low Difficulty Summary (original content by GrooveSquid.com)
A world model can help robots learn better by letting them imagine different scenarios. When many robots work together, however, building a good world model becomes harder: each robot's actions affect the others, making the future difficult to predict. To solve this problem, the researchers created a new type of world model that combines decentralized and centralized approaches. They used a kind of AI model called a Transformer to learn about the robots' interactions and make accurate predictions. The results showed that their approach learned and performed tasks better than others.
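To make the "centralized representation aggregation" idea concrete, here is a minimal NumPy sketch of Perceiver-style cross-attention: a small set of learned latent queries attends over the decentralized per-agent representations, producing a fixed-size centralized summary regardless of the number of agents. All names, sizes, and the single-head attention form are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def perceiver_aggregate(agent_reprs, latents, Wq, Wk, Wv):
    """Cross-attention: latent queries attend to per-agent representations.

    agent_reprs: (n_agents, d)   decentralized local representations
    latents:     (n_latents, d)  learned latent array (queries)
    Returns:     (n_latents, d)  centralized aggregated representation
                 (fixed size, independent of n_agents)
    """
    Q = latents @ Wq      # (n_latents, d)
    K = agent_reprs @ Wk  # (n_agents, d)
    V = agent_reprs @ Wv  # (n_agents, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (n_latents, n_agents)
    return attn @ V

# Toy usage with random weights (a real model would learn these).
rng = np.random.default_rng(0)
d, n_agents, n_latents = 16, 5, 4
agent_reprs = rng.normal(size=(n_agents, d))
latents = rng.normal(size=(n_latents, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = perceiver_aggregate(agent_reprs, latents, Wq, Wk, Wv)
print(out.shape)  # (4, 16)
```

The appeal of this pattern for MARL is that the aggregated output has a fixed shape no matter how many agents are present, which is one plausible reason a Perceiver-style aggregator scales better than concatenating all agents' states.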

Keywords

* Artificial intelligence  * Reinforcement learning  * Transformer