Summary of Controllable Game Level Generation: Assessing the Effect of Negative Examples in GAN Models, by Mahsa Bazzaz and Seth Cooper
Controllable Game Level Generation: Assessing the Effect of Negative Examples in GAN Models
by Mahsa Bazzaz, Seth Cooper
First submitted to arXiv on: 30 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper evaluates two controllable Generative Adversarial Network (GAN) variants, Conditional GAN (CGAN) and Rumi-GAN, on the task of generating game levels that satisfy specific constraints such as playability and controllability. Both models are trained on a dataset of game levels labeled with the target conditions. The evaluation covers two scenarios: one in which negative examples are included in training and one in which they are not. The results show that incorporating negative examples can help the GAN models avoid generating undesirable outputs. A simplified sketch of this kind of conditional training appears below the table. |
| Low | GrooveSquid.com (original content) | This paper compares two ways to make game levels with Generative Adversarial Networks (GANs). The first, Conditional GAN (CGAN), is told what kind of level it should produce. The second, Rumi-GAN, learns from both good and bad examples of levels. The researchers tested both approaches to see whether showing the models bad examples helps them make better levels. |
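The summaries above describe conditioning a GAN on labels and, in one scenario, mixing negative (undesired) examples into training. The sketch below is only a minimal illustration of that general idea in PyTorch, not the paper's actual architecture, dataset, or loss formulation: the sizes, layer choices, and the simplified handling of negative examples (treating them as "fake" for the discriminator, a loose stand-in for the Rumi-GAN loss) are all assumptions made for illustration.

```python
# Minimal sketch: conditional GAN training with optional negative examples.
# All dimensions, layers, and the negative-example term are illustrative
# assumptions, not the paper's actual setup.
import torch
import torch.nn as nn

LEVEL_DIM, COND_DIM, NOISE_DIM = 256, 2, 64  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, LEVEL_DIM), nn.Tanh())
    def forward(self, z, cond):
        # condition the generator by concatenating noise and label
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LEVEL_DIM + COND_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))
    def forward(self, x, cond):
        # condition the discriminator on the same label
        return self.net(torch.cat([x, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(pos_levels, pos_cond, neg_levels=None, neg_cond=None):
    """One training step; negative examples, when provided, are pushed
    toward the 'reject' side of the discriminator (a simplification of
    how Rumi-style losses use undesired samples)."""
    batch = pos_levels.size(0)
    z = torch.randn(batch, NOISE_DIM)
    fake = G(z, pos_cond)

    # --- discriminator update ---
    opt_d.zero_grad()
    d_loss = bce(D(pos_levels, pos_cond), torch.ones(batch, 1)) \
           + bce(D(fake.detach(), pos_cond), torch.zeros(batch, 1))
    if neg_levels is not None:
        # also teach the discriminator to reject undesired levels
        d_loss = d_loss + bce(D(neg_levels, neg_cond),
                              torch.zeros(neg_levels.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # --- generator update ---
    opt_g.zero_grad()
    g_loss = bce(D(fake, pos_cond), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Dummy tensors just to exercise the step (assumed shapes, not real levels).
pos = torch.rand(8, LEVEL_DIM) * 2 - 1
cond = torch.randint(0, 2, (8, COND_DIM)).float()
neg = torch.rand(8, LEVEL_DIM) * 2 - 1
print(train_step(pos, cond, neg, cond))
```

The only difference between the two scenarios in the sketch is whether `neg_levels` is passed to `train_step`; the extra term nudges the discriminator, and indirectly the generator, away from the undesired region of level space.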
Keywords
» Artificial intelligence » GAN » Generative adversarial network