Summary of Mitigating Knowledge Conflicts in Language Model-driven Question Answering, by Han Cao et al.
Mitigating Knowledge Conflicts in Language Model-Driven Question Answering by Han Cao, Zhaoyang Zhang, Xiangtian Li, Chufan…
ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models by Vipula Rawte, Sarthak Jain,…
Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization by Yuhan Fu, Ruobing…
Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs by Xiaofeng Zhang,…
LLM Hallucination Reasoning with Zero-shot Knowledge Test by Seongmin Lee, Hsiang Hsu, Chun-Fu Chen. First submitted to…
QCG-Rerank: Chunks Graph Rerank with Query Expansion in Retrieval-Augmented LLMs for Tourism Domain by Qikai Wei,…
Evaluating the Accuracy of Chatbots in Financial Literature by Orhan Erdem, Kristi Hassett, Feyzullah Egriboyun. First submitted…
Seeing Through the Fog: A Cost-Effectiveness Analysis of Hallucination Detection Systems by Alexander Thomas, Seth Rosen,…
Fine-Tuning Vision-Language Model for Automated Engineering Drawing Information Extraction by Muhammad Tayyab Khan, Lequn Chen, Ye…
VERITAS: A Unified Approach to Reliability Evaluation by Rajkumar Ramamurthy, Meghana Arakkal Rajeev, Oliver Molenschot, James…