Summary of Post-Training Statistical Calibration for Higher Activation Sparsity, by Vui Seng Chua et al.
Post-Training Statistical Calibration for Higher Activation Sparsity by Vui Seng Chua, Yujie Pan, Nilesh Jain. First submitted…
SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations by Zhaorun Chen, Francesco Pinto,…
On How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks by William…
Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks by Junhe Zhang, Wanli…
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices by Yongzhe Jia, Xuyun…
CPTQuant - A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models by Amitash Nanda,…
A Granger-Causal Perspective on Gradient Descent with Application to Pruning by Aditya Shah, Aditya Challa, Sravan…
Efficient Model Compression Techniques with FishLeg by Jamie McGowan, Wei Sheng Lai, Weibin Chen, Henry Aldridge,…
Efficient LLM Inference using Dynamic Input Pruning and Cache-Aware Masking by Marco Federici, Davide Belli, Mart…
TinyFusion: Diffusion Transformers Learned Shallow by Gongfan Fang, Kunjun Li, Xinyin Ma, Xinchao Wang. First submitted to…