Summary of LQER: Low-Rank Quantization Error Reconstruction for LLMs, by Cheng Zhang et al.
LQER: Low-Rank Quantization Error Reconstruction for LLMs by Cheng Zhang, Jianyi Cheng, George A. Constantinides, Yiren…
Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference by Man-Jie Yuan, Zheng Zou, Wei Gao. First submitted to…
Class Incremental Learning with Probability Dampening and Cascaded Gated Classifier by Jary Pomponi, Alessio Devoto, Simone…
Spiking CenterNet: A Distillation-Boosted Spiking Neural Network for Object Detection by Lennard Bodden, Franziska Schwaiger, Duc…
Addressing Bias Through Ensemble Learning and Regularized Fine-Tuning by Ahmed Radwan, Layan Zaafarani, Jetana Abudawood, Faisal…
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression by Dong Chen, Ning Liu, Yichen Zhu,…
Scavenging Hyena: Distilling Transformers into Long Convolution Models by Tokiniaina Raharison Ralambomihanta, Shahrad Mohammadzadeh, Mohammad Sami…
TQCompressor: Improving Tensor Decomposition Methods in Neural Networks via Permutations by V. Abronin, A. Naumov, D.…
Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection by Chen Liu, Shibo He,…
Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation by Vasileios Tsouvalas, Aaqib Saeed, Tanir…