Summary of BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments, by Xinghao Wang et al.
BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments, by Xinghao Wang, Pengyu Wang, …