


Cyclic Data Parallelism for Efficient Parallelism of Deep Neural Networks

by Louis Fournier, Edouard Oyallon

First submitted to arXiv on: 13 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Cyclic Data Parallelism shifts the execution of micro-batches from simultaneous to sequential, keeping total memory usage constant and balancing gradient communications throughout training. Combined with Model Parallelism, it reduces the number of GPUs required by sharing GPUs across micro-batches. Within the ZeRO-DP framework, collective broadcast operations can be replaced by point-to-point communications. The approach is validated on the CIFAR-10 and ImageNet datasets (a toy sketch of the cyclic schedule follows the summaries below).

Low Difficulty Summary (written by GrooveSquid.com, original content)
Cyclic Data Parallelism is a new way of training large deep learning models that makes them work faster and more efficiently. It does this by processing small chunks of data one after the other instead of all at once. This makes it possible to use less memory and reduces the amount of communication needed during training. By combining this with Model Parallelism, we can even reduce the number of powerful GPUs required. The approach shows promise on popular image datasets like CIFAR-10 and ImageNet.
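
To make the scheduling idea in the summaries concrete, here is a minimal, self-contained Python sketch. It is not taken from the paper: the stage counts, memory units, and per-worker delays are illustrative assumptions. It compares the total activation memory across workers when all micro-batches run the same stage simultaneously, as in standard data parallelism, with a cyclic schedule in which each worker starts one stage later than the previous one, as Cyclic Data Parallelism proposes.

```python
# Toy illustration (not the authors' implementation): contrast the memory
# profile of simultaneous micro-batch execution with a cyclic schedule in
# which worker i starts with a delay of i stages. Stage counts and memory
# units are made up for illustration only.

N_STAGES = 4            # forward stages; the backward pass mirrors them
CYCLE = 2 * N_STAGES    # one full forward + backward pass per worker
N_WORKERS = CYCLE       # chosen so the cyclic totals become exactly flat


def activation_memory(step: int) -> int:
    """Activation memory (arbitrary units) held by one worker at a given
    step of its own repeating forward/backward cycle."""
    step %= CYCLE
    if step < N_STAGES:          # forward pass: activations accumulate
        return step + 1
    return CYCLE - step          # backward pass: activations are freed


def total_memory(delays, horizon):
    """Summed memory across workers at each global step, given per-worker
    start delays (all zero = simultaneous, 0..N-1 = cyclic schedule)."""
    return [
        sum(activation_memory(t - d) for d in delays if t >= d)
        for t in range(horizon)
    ]


if __name__ == "__main__":
    horizon = 3 * CYCLE
    simultaneous = total_memory([0] * N_WORKERS, horizon)
    cyclic = total_memory(list(range(N_WORKERS)), horizon)
    print("simultaneous:", simultaneous)  # oscillates between low and high peaks
    print("cyclic:      ", cyclic)        # constant after the initial warm-up
```

After a short warm-up, the staggered schedule keeps the summed memory flat instead of oscillating between a low and a high peak, which is the balancing effect the summaries describe; the actual method applies the same staggering to gradient communications and to GPU sharing with Model Parallelism.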

Keywords

* Artificial intelligence
* Deep learning