Summary of From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression, by Eunseong Choi et al.
From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression, by Eunseong Choi, Sunkyung Lee, …