Summary of Depth Separation in Norm-bounded Infinite-width Neural Networks, by Suzanna Parkinson et al.
Depth Separation in Norm-Bounded Infinite-Width Neural Networks, by Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, …