Summary of Improving Neuron-level Interpretability with White-box Language Models, by Hao Bai et al.
Improving Neuron-level Interpretability with White-box Language Models, by Hao Bai, Yi Ma. First submitted to arXiv on: …