Summary of FLoCoRA: Federated Learning Compression with Low-rank Adaptation, by Lucas Grativol Ribeiro et al.
FLoCoRA: Federated learning compression with low-rank adaptation, by Lucas Grativol Ribeiro, Mathieu Leonardon, Guillaume Muller, Virginie…
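The title names the core mechanism: reducing federated-learning communication by exchanging low-rank adapters instead of full weight matrices. Below is a minimal LoRA-style sketch of that idea in PyTorch; it is an illustration under assumed names, ranks, and layer sizes, not the authors' implementation.

# Minimal low-rank adaptation (LoRA) sketch: instead of communicating a full
# weight matrix W (d_out x d_in) each federated round, clients train and send
# two small factors A (rank x d_in) and B (d_out x rank), rank << min(d_out, d_in).
# All hyperparameters here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)      # frozen shared/pretrained weight
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: update starts at 0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + scale * (B @ A); only A and B are trained/sent.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(d_in=512, d_out=512, rank=4)
x = torch.randn(2, 512)
print(layer(x).shape)  # torch.Size([2, 512])
# Per-round upload: 2 * 4 * 512 = 4,096 values vs. 512 * 512 = 262,144 (~64x smaller).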
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions, by Laiqiao…
MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases, by Rithesh Murthy, Liangwei Yang, Juntao Tan,…
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications, by Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew…
QuantMoE-Bench: Examining Post-Training Quantization for Mixture-of-Experts, by Pingzhi Li, Xiaolong Jin, Zhen Tan, Yu Cheng, Tianlong…
Slicing Mutual Information Generalization Bounds for Neural Networks, by Kimia Nadjahi, Kristjan Greenewald, Rickard Brüel Gabrielsson,…
Efficient Model Compression for Hierarchical Federated Learning, by Xi Zhu, Songcan Yu, Junbo Wang, Qinglin Yang
Efficiency optimization of large-scale language models based on deep learning in natural language processing tasks, by…
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting, by Shreyan Ganguly, Roshan Nayak,…
Characterizing the Accuracy-Efficiency Trade-off of Low-rank Decomposition in Language Models, by Chakshu Moar, Faraz…