Summary of Compute Better Spent: Replacing Dense Layers with Structured Matrices, by Shikai Qiu et al.
Compute Better Spent: Replacing Dense Layers with Structured Matrices
by Shikai Qiu, Andres Potapczynski, Marc Finzi, …