Summary of The Sampling Complexity of Learning Invertible Residual Neural Networks, by Yuanyuan Li et al.
The sampling complexity of learning invertible residual neural networks, by Yuanyuan Li, Philipp Grohs, Philipp Petersen. First…
Theoretically informed selection of latent activation in autoencoder based recommender systems, by Aviad Susman. First submitted to…
Generalization and Risk Bounds for Recurrent Neural Networks, by Xuewei Cheng, Ke Huang, Shujie Ma. First submitted…
A Convex Relaxation Approach to Generalization Analysis for Parallel Positively Homogeneous Networks, by Uday Kiran Reddy…
Sparsing Law: Towards Large Language Models with Greater Activation Sparsity, by Yuqi Luo, Chenyang Song, Xu…
Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks, by Jim Zhao, Sidak Pal Singh, Aurelien…
Nonparametric estimation of Hawkes processes with RKHSs, by Anna Bonnet, Maxime Sangnier. First submitted to arxiv on:…
Convex Formulations for Training Two-Layer ReLU Neural Networks, by Karthik Prakhya, Tolga Birdal, Alp Yurtsever. First submitted…
Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient, by Vu C. Dinh, Lam Si Tung…
Minimax optimality of deep neural networks on dependent data via PAC-Bayes bounds, by Pierre Alquier, William…