Summary of A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge, by Hasanul Mahmud et al.
A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge, by Hasanul Mahmud, Peng…