Summary of FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models, by Kai Yi et al.
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models by Kai Yi, Georg Meinhardt, Laurent Condat,…
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency by Hallgrimur Thorsteinsson,…
Generalized Relevance Learning Grassmann Quantization by M. Mohammadi, M. Babai, M.H.F. Wilkinson. First submitted to arXiv on:…
COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization by Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qi,…
What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of…
FrameQuant: Flexible Low-Bit Quantization for Transformers by Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh. First submitted…
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers by Seyed Parsa Neshaei, Yasaman…
GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM by Hao Kang,…
On-demand Quantization for Green Federated Generative Diffusion in Mobile Edge Networks by Bingkun Lai, Jiayi He,…
Deep-Learned Compression for Radio-Frequency Signal Classification by Armani Rodriguez, Yagna Kaasaragadda, Silvija Kokalj-Filipovic. First submitted to arXiv…