Summary of Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion, by Samarth N Ramesh et al.
Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion, by Samarth N Ramesh, Zhixue Zhao. First…
Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms, by Haizhou Ge,…
Change Is the Only Constant: Dynamic LLM Slicing based on Layer Redundancy, by Razvan-Gabriel Dumitru, Paul-Ioan…
Efficient Model Compression for Bayesian Neural Networks, by Diptarka Saha, Zihe Liu, Feng Liang. First submitted to…
Beware of Calibration Data for Pruning Large Language Models, by Yixin Ji, Yang Xiang, Juntao Li,…
Continuous Approximations for Improving Quantization Aware Training of LLMs, by He Li, Jianhang Hong, Yuanzhuo Wu,…
Geometry is All You Need: A Unified Taxonomy of Matrix and Tensor Factorization for Compression…
LoCa: Logit Calibration for Knowledge Distillation, by Runming Yang, Taiqiang Wu, Yujiu Yang. First submitted to arxiv…
Hyper-Compression: Model Compression via Hyperfunction, by Fenglei Fan, Juntong Fan, Dayang Wang, Jingbo Zhang, Zelin Dong,…
Variational autoencoder-based neural network model compression, by Liang Cheng, Peiyuan Guan, Amir Taherkordi, Lei Liu, Dapeng…