Summary of MixPE: Quantization and Hardware Co-design for Efficient LLM Inference, by Yu Zhang et al.
MixPE: Quantization and Hardware Co-design for Efficient LLM Inference, by Yu Zhang, Mingzi Wang, Lancheng Zou, …