Black Boxes and Looking Glasses: Multilevel Symmetries, Reflection Planes, and Convex Optimization in Deep Networks

by Emi Zeger, Mert Pilanci

First submitted to arXiv on: 5 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
We present a novel framework for understanding deep neural network (DNN) behavior: training a DNN with absolute value activation and arbitrary input dimension can be formulated as an equivalent convex Lasso problem. This reformulation reveals geometric structures that encode symmetry in neural networks and formally proves a distinction between deep and shallow architectures: deeper networks favor symmetric structures, enabling multilevel symmetries. The approach also highlights reflection hyperplanes spanned by the training data, which are orthogonal to the optimal weight vectors. Numerical experiments support the theoretical findings, exhibiting the predicted features when networks are trained on Large Language Model embeddings.
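To make the Lasso reformulation concrete, here is a minimal, hypothetical sketch in one input dimension: the network's absolute-value neurons are treated as fixed dictionary features |x - b_k|, with the breakpoints b_k placed at the training points, and training reduces to a convex Lasso fit of the output weights. The breakpoint placement, the toy data, and the scikit-learn solver are illustrative assumptions, not the paper's exact construction.

```python
# A minimal sketch of the convex Lasso view of training a shallow
# absolute-value network in 1-D. Illustrative simplification only:
# we assume a dictionary of absolute-value ridge features |x - b_k|
# with breakpoints b_k at the training points (hypothetical choice),
# and fit the output weights with an off-the-shelf Lasso solver.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy 1-D regression data (hypothetical).
X = rng.uniform(-1.0, 1.0, size=40)
y = np.abs(X - 0.3) - 0.5 * np.abs(X + 0.4)  # target built from two ridges

# Dictionary matrix: column k holds the feature |x - b_k|, with the
# breakpoints b_k taken to be the training inputs themselves.
breakpoints = X.copy()
A = np.abs(X[:, None] - breakpoints[None, :])  # shape (n_samples, n_features)

# Convex Lasso problem: min_z (1/(2n)) * ||A z - y||^2 + alpha * ||z||_1.
# The l1 penalty stands in for the weight regularizer of the equivalent
# non-convex network training problem.
lasso = Lasso(alpha=1e-3, fit_intercept=True, max_iter=50_000)
lasso.fit(A, y)

# Nonzero coefficients select a sparse set of neurons |x - b_k|;
# the surviving breakpoints are where the learned function bends.
active = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("active breakpoints:", np.round(breakpoints[active], 3))
print("coefficients:      ", np.round(lasso.coef_[active], 3))
```

In this toy setting the surviving breakpoints are loosely analogous to the reflection structures the paper describes: the fitted piecewise-linear function is symmetric locally around each active breakpoint, and the convex problem selects which of those symmetry points the data actually supports.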
Low Difficulty Summary (original content by GrooveSquid.com)
Researchers have discovered a new way to understand how deep neural networks work. They found that training these networks can be recast as a classic, well-understood optimization problem called the Lasso. This recasting exposes patterns and symmetries inside the networks, helping explain why deeper networks handle certain tasks better than shallower ones. The team also found that the networks rely on special mirror-like planes, built from the training data, to organize what they learn, and their experiments confirmed these predictions.

Keywords

  • Artificial intelligence
  • Large language model
  • Neural network