Summary of SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression, by Xin Wang et al.
SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression, by Xin Wang, Yu Zheng, Zhongwei…
Differentially Private Knowledge Distillation via Synthetic Text Generation, by James Flemings, Murali Annavaram. First submitted to arXiv…
Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation, by Haruka Ezoe,…
FinGPT-HPC: Efficient Pretraining and Finetuning Large Language Models for Financial Applications with High-Performance Computing, by Xiao-Yang…
PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning, by Gyeongman Kim, Doohyuk Jang,…
Bayesian Deep Learning Via Expectation Maximization and Turbo Deep Approximate Message Passing, by Wei Xu, An…
A Survey on Transformer Compression, by Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhijun Tu, Kai Han,…
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models, by Hyesung Jeon, Yulhwa Kim, Jae-joon Kim. First…
Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes, by Xiaoxin Su, Yipeng Zhou, Laizhong Cui, John…
Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward, by Arnav Chavan, Raghav…