Summary of Low-Rank Learning by Design: the Role of Network Architecture and Activation Linearity in Gradient Rank Collapse, by Bradley T. Baker et al.

Low-Rank Learning by Design: the Role of Network Architecture and Activation Linearity in Gradient Rank Collapse

by Bradley T. Baker, Barak A. Pearlmutter, Robyn Miller, Vince D. Calhoun, Sergey M. Plis

First submitted to arXiv on: 9 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper delves into the learning dynamics of deep neural networks (DNNs), exploring the role of geometric constraints in learning. It builds on recent research on “Neural Collapse” and examines how architectural choices and data structure affect bounds on gradient rank in fully-connected, recurrent, and convolutional neural networks. The authors provide theoretical analysis and empirical demonstrations of how design decisions, such as the linearity of the activation function, the introduction of bottleneck layers, and sequence truncation in recurrent models, influence these bounds.
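
To make the bottleneck effect concrete, here is a minimal sketch (not from the paper; the three-layer linear setup, all layer widths, and the PyTorch usage are illustrative assumptions). With purely linear activations, the error signal reaching an early layer must pass through any downstream bottleneck, so the gradient of that layer's weight matrix has rank at most the bottleneck width:

```python
import torch

torch.manual_seed(0)

# Hypothetical sizes chosen only to make the effect visible:
# a 4-unit bottleneck sits downstream of a 128x128 first layer.
batch, d_in, d_wide, d_neck, d_out = 64, 128, 128, 4, 128

W1 = torch.randn(d_wide, d_in, requires_grad=True)    # wide early layer
W2 = torch.randn(d_neck, d_wide, requires_grad=True)  # bottleneck layer
W3 = torch.randn(d_out, d_neck, requires_grad=True)

x = torch.randn(batch, d_in)
y = torch.randn(batch, d_out)

# Forward pass with identity (fully linear) activations.
out = ((x @ W1.T) @ W2.T) @ W3.T
loss = ((out - y) ** 2).mean()
loss.backward()

# The error signal reaching W1 is (dL/dout @ W3) @ W2: a product whose
# inner dimension is the bottleneck width, so rank(dL/dW1) <= d_neck,
# even though W1.grad is a 128x128 matrix.
print(torch.linalg.matrix_rank(W1.grad).item())  # prints a value <= 4
```

If the identity activations in this sketch are replaced with a nonlinearity such as ReLU, the measured rank generally climbs back above the bottleneck width, consistent with the paper's point that activation linearity is what makes such bounds tight.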
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how deep neural networks learn. It starts from what happens when a network is trained really well, and then explores how, even before that point, the way gradients behave in a network is shaped by its architecture and the kind of data it is learning from. The authors work out the math behind these effects, and they also test their ideas on real networks.

Keywords

* Artificial intelligence