Summary of How Can Deep Neural Networks Fail Even with Global Optima?, by Qingguang Guan
How Can Deep Neural Networks Fail Even With Global Optima?, by Qingguang Guan. First submitted to arxiv…
Instance Selection for Dynamic Algorithm Configuration with Reinforcement Learning: Improving Generalization, by Carolin Benjamins, Gjorgjina Cenikj, …
Deep Learning Activation Functions: Fixed-Shape, Parametric, Adaptive, Stochastic, Miscellaneous, Non-Standard, Ensemble, by M. M. Hammad. First submitted…
Adaptive Parametric Activation, by Konstantinos Panagiotis Alexandridis, Jiankang Deng, Anh Nguyen, Shan Luo. First submitted to arxiv…
Motion meets Attention: Video Motion Prompts, by Qixiang Chen, Lei Wang, Piotr Koniusz, Tom Gedeon. First submitted…
Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT, by Le Zhuo, Ruoyi Du, Han Xiao, Yangguang…
An Autotuning-based Optimization Framework for Mixed-kernel SVM Classifications in Smart Pixel Datasets and Heterojunction Transistors, by…
Optimized Speculative Sampling for GPU Hardware Accelerators, by Dominik Wagner, Seanie Lee, Ilja Baumann, Philipp Seeberger, …
Separation Power of Equivariant Neural Networks, by Marco Pacini, Xiaowen Dong, Bruno Lepri, Gabriele Santin. First submitted…
Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts, by Huy Nguyen, …