Summary of You Only Use Reactive Attention Slice For Long Context Retrieval, by Yun Joon Soh et al.
You Only Use Reactive Attention Slice For Long Context Retrieval, by Yun Joon Soh, Hanxian Huang,…