Summary of EPSD: Early Pruning with Self-Distillation for Efficient Model Compression, by Dong Chen et al.
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression, by Dong Chen, Ning Liu, Yichen Zhu,…
SwapNet: Efficient Swapping for DNN Inference on Edge AI Devices Beyond the Memory Budget, by Kun…
CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks, by Andrei Tomut, Saeed S.…
SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning for Compression, by Ho Fung Tsoi, Vladimir Loncar,…
Knowledge Translation: A New Pathway for Model Compression, by Wujie Sun, Defang Chen, Jiawei Chen, Yan…
Explainability-Driven Leaf Disease Classification Using Adversarial Training and Knowledge Distillation, by Sebastian-Vasile Echim, Iulian-Marius Tăiatu, Dumitru-Clementin…
FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks, by Akul Malhotra, Sumeet…