Summary of Token Pruning Using a Lightweight Background Aware Vision Transformer, by Sudhakar Sah et al.
Token Pruning using a Lightweight Background Aware Vision Transformer by Sudhakar Sah, Ravish Kumar, Honnesh Rohmetra,…