Summary of Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning, by Daegun Yoon et al.
Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning, by Daegun Yoon and Sangyoon Oh. First submitted…