
Summary of Modular Learning Of Deep Causal Generative Models For High-dimensional Causal Inference, by Md Musfiqur Rahman et al.


Modular Learning of Deep Causal Generative Models for High-dimensional Causal Inference

by Md Musfiqur Rahman, Murat Kocaoglu

First submitted to arXiv on: 2 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Information Theory (cs.IT); Methodology (stat.ME); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty summary (written by the paper authors)
Read the original abstract here.
Medium difficulty summary (GrooveSquid.com original content)
This paper proposes modular training of deep causal generative models to efficiently compute identifiable causal queries from high-dimensional data, such as images. Existing algorithms for this task assume an accurate estimate of the data distribution, which is impractical for high-dimensional variables. Deep generative models can sample from such distributions, but training them from scratch is costly. To address this, the authors develop Modular-DCM, an algorithm that uses adversarial training to learn the network weights and can plug in large pre-trained conditional generative models as modules. The proposed method outperforms baselines on the Colored-MNIST dataset and demonstrates convergence and utility on the COVIDx and CelebA-HQ datasets.
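To make the modular idea concrete, here is a toy sketch (not the paper's Modular-DCM implementation, and with no adversarial training): a causal generative model is assembled from one module per variable in a graph X → Y, sampled in topological order. A "pre-trained" module for X can be plugged in unchanged, and an intervention do(X = x) simply swaps X's module for a constant while Y's module is untouched. All names (`sample_x`, `sample_y`, `sample_model`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x(n):
    # stand-in for a pre-trained generative module for X
    return rng.normal(loc=0.0, scale=1.0, size=n)

def sample_y(x):
    # module for Y given its parent X: Y = 2X + small noise
    return 2.0 * x + rng.normal(scale=0.1, size=x.shape)

def sample_model(n, do_x=None):
    # sample variables in causal (topological) order; an intervention
    # replaces X's module with a constant, reusing Y's module as-is
    x = np.full(n, do_x) if do_x is not None else sample_x(n)
    y = sample_y(x)
    return x, y

# observational vs. interventional samples
x_obs, y_obs = sample_model(10_000)
x_int, y_int = sample_model(10_000, do_x=3.0)
```

Because modules are composed rather than trained jointly, answering the interventional query P(Y | do(X = 3)) only required swapping one module; this is the high-level property the paper exploits for high-dimensional variables.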
Low difficulty summary (GrooveSquid.com original content)
This paper helps us understand how computers can better use big data to answer questions about cause-and-effect relationships. Right now, most algorithms for this task assume we know a lot about the data distribution, which isn’t possible when dealing with very high-dimensional data like images. The authors propose a new way to train deep learning models that not only makes training faster but also lets us reuse pre-trained models to answer these questions.

Keywords

  • Artificial intelligence
  • Deep learning