Summary of Sparser, Better, Deeper, Stronger: Improving Sparse Training with Exact Orthogonal Initialization, by Aleksandra Irena Nowak et al.
Sparser, Better, Deeper, Stronger: Improving Sparse Training with Exact Orthogonal Initialization
by Aleksandra Irena Nowak, Łukasz Gniecki, Filip Szatkowski, Jacek Tabor
First submitted to arXiv on: 3 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper proposes Exact Orthogonal Initialization (EOI), a new weight initialization scheme for static sparse training. EOI composes random Givens rotations to produce exactly orthogonal weight matrices at any desired density (see the sketch after this table). In experiments, EOI consistently outperforms common sparse initialization techniques and enables training highly sparse 1000-layer MLP and CNN networks without residual connections or normalization. The results highlight the crucial role of weight initialization, alongside sparse mask selection, in static sparse training.
Low | GrooveSquid.com (original content) | Static sparse training can achieve remarkable results, but existing initialization methods may not fully exploit the effect of initialization on optimization. The authors propose making the sparse sub-network orthogonal at initialization, which helps stabilize the gradient signal as it flows through deep networks. Because their method provides exact orthogonality and supports layers of arbitrary density, it leads to better results and more efficient training.
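The core of EOI is simple to sketch in code. Below is a minimal NumPy illustration, assuming a hypothetical `random_givens_orthogonal` helper (the name and sampling details are our own, not the authors' reference implementation): starting from the identity matrix, each random Givens rotation mixes exactly two rows, so the product remains exactly orthogonal while its density grows with the number of rotations applied.

```python
import numpy as np

def random_givens_orthogonal(n, num_rotations, seed=0):
    """Compose random Givens rotations into an exactly orthogonal
    matrix whose density grows with num_rotations.

    Hypothetical sketch of the EOI idea, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    W = np.eye(n)  # the identity is orthogonal and maximally sparse
    for _ in range(num_rotations):
        # Sample a random coordinate pair and a rotation angle.
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        # Apply the Givens rotation G(i, j, theta) on the left.
        # It mixes only rows i and j, so W stays exactly orthogonal
        # while at most 2n entries can become nonzero per rotation.
        row_i, row_j = W[i].copy(), W[j].copy()
        W[i] = c * row_i - s * row_j
        W[j] = s * row_i + c * row_j
    return W

W = random_givens_orthogonal(64, num_rotations=128)
print("max |W^T W - I|:", np.abs(W.T @ W - np.eye(64)).max())  # ~1e-15
print("density:", (np.abs(W) > 1e-12).mean())
```

Fewer rotations yield a sparser matrix, so the rotation count acts as the knob that trades density for mixing; this is what lets the scheme target an arbitrary layer density while keeping orthogonality exact rather than approximate.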
Keywords
» Artificial intelligence » CNN » Mask » Optimization