Summary of High Confidence Level Inference Is Almost Free Using Parallel Stochastic Optimization, by Wanrong Zhu et al.
High Confidence Level Inference is Almost Free using Parallel Stochastic Optimization by Wanrong Zhu, Zhipeng Lou,…
MADA: Meta-Adaptive Optimizers through hyper-gradient Descent by Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham…
Bridging State and History Representations: Understanding Self-Predictive RL by Tianwei Ni, Benjamin Eysenbach, Erfan Seyedsalehi, Michel…
Stochastic Subnetwork Annealing: A Regularization Technique for Fine Tuning Pruned Subnetworks by Tim Whitaker, Darrell Whitley. First…
X Hacking: The Threat of Misguided AutoML by Rahul Sharma, Sergey Redyuk, Sumantrak Mukherjee, Andrea Sipka,…
Boosting Gradient Ascent for Continuous DR-submodular Maximization by Qixin Zhang, Zongqi Wan, Zengde Deng, Zaiyi Chen,…
Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference by Jinghan Yao, Quentin Anthony, Aamir Shafi,…
GD doesn’t make the cut: Three ways that non-differentiability affects neural network training by Siddharth Krishna…
CycLight: learning traffic signal cooperation with a cycle-level strategy by Gengyue Han, Xiaohan Liu, Xianyue Peng,…
Efficient and Mathematically Robust Operations for Certified Neural Networks Inference by Fabien Geyer, Johannes Freitag, Tobias…