Summary of The Optimality of (Accelerated) SGD for High-Dimensional Quadratic Optimization, by Haihan Zhang et al.
The Optimality of (Accelerated) SGD for High-Dimensional Quadratic Optimization by Haihan Zhang, Yuanshi Liu, Qianwen Chen,…
Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent by Hikaru Umeda, Hideaki Iiduka. First…
Asymptotics of Stochastic Gradient Descent with Dropout Regularization in Linear Models by Jiaqi Li, Johannes Schmidt-Hieber,…
Convergence of continuous-time stochastic gradient descent with applications to linear deep neural networks by Gabor Lugosi,…
Dynamic Decoupling of Placid Terminal Attractor-based Gradient Descent Algorithm by Jinwei Zhao, Marco Gori, Alessandro Betti,…
DynamicFL: Federated Learning with Dynamic Communication Resource Allocation by Qi Le, Enmao Diao, Xinran Wang, Vahid…
NGD converges to less degenerate solutions than SGD by Moosa Saghir, N. R. Raghavendra, Zihe Liu,…
Approximating Metric Magnitude of Point Sets by Rayna Andreeva, James Ward, Primoz Skraba, Jie Gao, Rik…
Introduction to Machine Learning by Laurent Younes. First submitted to arxiv on: 4 Sep 2024. Categories. Main: Machine Learning…
Bootstrap SGD: Algorithmic Stability and Robustness by Andreas Christmann, Yunwen Lei. First submitted to arxiv on: 2…